Article

Sensory substitution and the human–machine interface

Authors: P. Bach-y-Rita and S. W. Kercel

Abstract

Recent advances in the instrumentation technology of sensory substitution have opened new opportunities to develop systems that compensate for sensory loss. In sensory substitution (e.g. of sight or vestibular function), information from an artificial receptor is coupled to the brain via a human-machine interface. The brain is able to use this information in place of that usually transmitted from an intact sense organ. Both auditory and tactile systems show promise as practical sites for sensory substitution interfaces. This research provides experimental tools for examining brain plasticity and has implications for studies of perception and cognition more generally.


... The transmission of human body movement signals to other devices through wearable smart bracelets [1] has attracted increasing attention in the field of human-machine interfaces (HMI) [2]. However, considering that the bracelet's data collection range is limited to a small portion of the wrist, it is necessary to simplify the acquired signal so that the effective collection range can be extended to the limbs and the signal processing accuracy can be improved. ...
... Each wrist movement combines with a finger movement to give a coordinated gesture (gesture indices in parentheses). With Hand close (19): wrist flexion (1) → (7), wrist extension (2) → (8), wrist radial (3) → (9), wrist ulnar (4) → (10), wrist pronation (5) → (11), wrist supination (6) → (12). With Hand open (20): wrist flexion (1) → (13), wrist extension (2) → (14), wrist radial (3) → (15), wrist ulnar (4) → (16), wrist pronation (5) → (17), wrist supination (6) → (18). ... as shown in Figure 5(a). In contrast, during hand opening, the data fitting error of the anterior muscle group was relatively small, while that of the posterior muscle group was relatively large, as shown in Figure 5(b). ...
Article
Full-text available
The transmission of human body movement signals to other devices through wearable smart bracelets has attracted increasing attention in the field of human-machine interfaces. However, owing to the limited data collection range of wearable bracelets, it is necessary to study the relationship between the superposition of wrist and finger motions and their cooperative motions to simplify the data collection system of such devices. Multichannel high-density surface electromyogram (HD-sEMG) signals exhibit high spatial resolution, and they can help improve the accuracy of multichannel fitting. In this study, we quantified the HD-sEMG forearm spatial activation features of 256 channels of hand movement and performed a linear fitting of the data obtained for finger and wrist movements in order to verify the linear superposition relationship between the cooperative and independent movements of the wrist and fingers. The study then classifies and predicts both the fitted and the measured finger-wrist cooperative actions using four commonly adopted classifiers and evaluates the performance of these classifiers on gesture fitting. The results indicated that linear discriminant analysis affords the highest classification performance, whereas the random forest method achieved the worst performance. This study can serve as a guide for gesture signal simplification in the future.
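The four-classifier comparison described in this abstract can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic stand-in features: the dataset generator, feature count (one per hypothetical fitted channel), and class count are assumptions for demonstration, not the paper's HD-sEMG data.

```python
# Sketch: comparing LDA, KNN, SVM and RF classifiers, as in the cited study,
# on synthetic stand-ins for sEMG activation features.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# 12 synthetic "gesture" classes, 8 features (e.g. one per fitted channel)
X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                           n_classes=12, n_clusters_per_class=1,
                           random_state=0)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "RF": RandomForestClassifier(random_state=0),
}

# 5-fold cross-validated accuracy for each classifier
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in classifiers.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

Cross-validated accuracy gives a fairer basis for ranking classifiers than a single train/test split, which matters when the goal, as here, is to recommend one classifier over another.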
... The glabrous skin of the human hand is innervated by 12 diverse types of afferent fibres, which are responsible for perceiving pain, thermal, kinesthetic, and tactile (e.g., form, texture, skin's motion, motion of exogenous objects, pressure) sensations [1]. This haptic information is considered crucial for facilitating effective human-machine/computer interaction [2], [3], [4], [5], [6]. Indeed, haptic information is needed in human-machine/computer-interaction systems where the user may operate virtual or robotic hands, or simply interact with virtual environments or physical objects at a distance. ...
... Electrotactile feedback has been successfully implemented in human-computer [9], [21] and human-machine interactions [2], [22] for various hand-based applications such as prosthetics [22], virtual reality [21], robotic teleoperation [23], [24], transparent haptic displays [25], [26], and augmented haptics [27], [28]. Thanks to the reduced form factor and price, electrotactile feedback is particularly promising for everyday applications, such as gaming and entertainment, as well as Augmented Reality (AR), where users need to interact with real and virtual objects at the same time [4], [14]. ...
... Nevertheless, these encouraging results were not limited to the stimulation of the fingers. Another area of interest on the hand, for facilitating human-machine/computer interactions, is the palm [2], [4], [9], [11], [60]. However, the skin of the palm folds and stretches in various ways especially in its central area [72], which may affect the impedance of electrical stimulation. ...
Article
Haptic feedback is critical in a broad range of human-machine/computer-interaction applications. However, the high cost and low portability/wearability of haptic devices remain unresolved issues, severely limiting the adoption of this otherwise promising technology. Electrotactile interfaces have the advantage of being more portable and wearable due to their smaller actuators, as well as their lower power consumption and manufacturing cost. The applications of electrotactile feedback have been explored in human-computer and human-machine interaction for facilitating hand-based interactions in applications such as prosthetics, virtual reality, robotic teleoperation, surface haptics, portable devices, and rehabilitation. This paper presents a technological overview of electrotactile feedback, as well as a systematic review and meta-analysis of its applications for hand-based interactions. We discuss the different electrotactile systems according to the type of application, and we aggregate the findings quantitatively to offer a high-level overview of the state of the art and suggest future directions. Electrotactile feedback systems showed increased portability/wearability, and they were successful in rendering and/or augmenting most tactile sensations, eliciting perceptual processes, and improving performance in many scenarios. However, knowledge gaps (e.g., embodiment), technical drawbacks (e.g., recurrent calibration, electrode durability) and methodological drawbacks (e.g., sample size) were detected, which should be addressed in future studies.
... In the majority of modern studies on tactile-visual sensory substitution (TVSS), images are acquired by a camera and then converted either into vibration patterns or into direct electrical stimulation of the skin, applied to one or more parts of the body, such as the back, forehead, fingertips, or even the tongue (Bach-y-Rita & W. Kercel, 2003; Dublon et al., 2012). Visual information arrives at perceptual levels for analysis and interpretation through somatosensory pathways and structures. Following the training phase, individuals report being able to experience images in the surrounding space, rather than on the skin, in spite of the fact that the stimulus reaches the cognitive processes through the skin itself. ...
... Furthermore, he openly stated that for sensory substitution to be considered a pathway towards the creation of a new sense, rather than a mere transposition, the user has to control the movements and actions of the camera (Bach-y-Rita & W. Kercel, 2003). Nevertheless, such a consideration solves only one half of the problem. Even when the machine is controlled by the user, on a cognitive level the standard process of sensory substitution directly transfers the process of abstraction from one sense to the other; it therefore does not generate a new set of reference categories but shifts the already existing (visual) ones to be interpreted by another sense, without promoting any further or different elaboration of them. ...
... Finally, as already exemplified by the exploration of sensory substitution systems, for an active process of perception and creation of meaning, it is essential that the individual has direct control of the artificial perceptual system (Bach-y-Rita & W. Kercel, 2003). This involves removing the separation between effectors and controls, which in turn become a single system in symbiosis. The controls will no longer be a component of the mechanical system; their functions will be incorporated into the motor functions of the individual, already used for the control of the other sensory modalities. ...
Thesis
Full-text available
The role that the human sensory system plays in the daily interaction of individuals with the external world has far more articulated ramifications than expected from a component of human nature that appears to be so linear and straightforward. Starting from the conditions of sensory impairment as a vehicle for the analysis of the field, the multidisciplinary study of the perceptual process highlights how sight plays a dominant role compared to the other senses, and how this dominance fits into a context of dichotomies inherent in the today’s social structure, with negative impacts on the personal, psychological and social context of a part of humanity. The technical-scientific developments of the last decades, and in particular the innovations aimed at the enhancement of the human being through technology, have proved not only to be an instrument of emancipation from these hierarchical dictates, but also a space of opportunity for a reconsideration of design and research constraints, constraints generated by a bounded consideration of the sensory spectrum. The objective of the thesis in question consists in the development of an intervention in the field of visual impairments capable of suggesting an alternative to these principles through the intersection between human and technological elements in the restructuring of the perceptual approach and the analysis of the potential of the discipline of design in the exploration of alternative intervention methods. This goal will be achieved through the development of a support device for individuals with visual impairments. This device will allow the creation of a sensorial space alternative to the vision-centric standard, through the synesthetic interaction between an artificial vision system and natural systems of visual and haptic perception. 
In the course of the research path, the design process will be positioned in a role complementary to its usual connotation, which sees it as a tool for developing solutions and products dedicated to the specific area of intervention. It will also play a role as a creative force for new exploratory spaces where the design itself can have a functional, social and political value; spaces created in the intersection of science, speculation and interaction between the natural components and the artificial ones of organisms.
... A failure or disease of a sensory organ can occur not only at birth, but also later in life. This can cause considerable problems in everyday life and can lead to neurodegenerative diseases and depression (Bach-y-Rita and Kercel, 2003). Thanks to scientific and technological progress, it is nowadays possible to compensate for or treat a damaged or diseased sensory organ by using sensory substitution (Bach-y-Rita and Kercel, 2003). ...
... This can cause considerable problems in everyday life and can lead to neurodegenerative diseases and depression (Bach-y-Rita and Kercel, 2003). Thanks to scientific and technological progress, it is nowadays possible to compensate for or treat a damaged or diseased sensory organ by using sensory substitution (Bach-y-Rita and Kercel, 2003). A failure of an organ does not affect the whole sense, but only the part responsible for transmitting the signal to the brain, as is commonly the case with the retina in the eye or the cochlea in the ear (Bach-y-Rita and Kercel, 2003). ...
... Thanks to scientific and technological progress, it is nowadays possible to compensate for or treat a damaged or diseased sensory organ by using sensory substitution (Bach-y-Rita and Kercel, 2003). A failure of an organ does not affect the whole sense, but only the part responsible for transmitting the signal to the brain, as is commonly the case with the retina in the eye or the cochlea in the ear (Bach-y-Rita and Kercel, 2003). Besides, sensory substitution offers the possibility of replacing a failed sense with another one, for example seeing by hearing, or seeing by feeling and hearing (Deroy and Auvray, 2012), and hearing by seeing and reading, or hearing by feeling or touching (Cieśla et al., 2019). ...
Conference Paper
Full-text available
Hearing-impaired people are exposed to greater dangers in everyday life because they cannot perceive danger and warning signals. This paper addresses this problem by developing an application that helps by classifying and detecting the direction of ambient sounds using Microsoft HoloLens 2 devices. The developed application implements a client-server architecture. The server-side REST API not only supports the classification of sounds from audio files via deep-learning methods, but also allows the results of the sound source localization to be saved and read. The sound source localization is performed by a Maix Bit microcontroller with a 6-channel microphone array. For user integration and interaction with the application, a 3D scene has been designed using Unity and the Mixed Reality Toolkit (MRTK). The implemented application showcases how classification and direction detection of ambient sounds could be done on the Microsoft HoloLens to support hearing-impaired people.
... The basic assumption of SS is that the function of a missing or impaired sensory modality can be replaced by stimulating another sensory modality using the missing information. This only works because the brain is so plastic that it learns to associate the new stimuli with the missing modality, as long as they fundamentally share the same characteristics [30]. Surgical intervention would not be necessary because existing modalities or sensory organs can be used instead. ...
... In general, brain plasticity describes the "adaptive capacities of the central nervous system" and "its ability to modify its own structural organization and functioning" [30]. While neuroscience has long assumed a fixed assignment of certain sensory and motor functions to specific areas of the brain, we today know that the brain is capable of reorganising itself, e.g., after brain damage [51] and, moreover, is capable of learning new sensory stimuli not only in early development but throughout life [52]. ...
... Despite the long tradition of research on the topic of SS and numerous publications with promising results, the concept has not yet achieved a real breakthrough. The exact reasons for the low number of available devices and users have often been discussed and made the subject of proposals for improvement [30,40,44,55,56]. ...
Article
Full-text available
This paper documents the design, implementation and evaluation of the Unfolding Space Glove – an open source sensory substitution device. It transmits the relative position and distance of nearby objects as vibratory stimuli to the back of the hand and thus enables blind people to haptically explore the depth of their surrounding space, assisting with navigation tasks such as object recognition and wayfinding. The prototype requires no external hardware, is highly portable, operates in all lighting conditions, and provides continuous and immediate feedback – all while being visually unobtrusive. Both blind (n = 8) and blindfolded sighted participants (n = 6) completed structured training and obstacle courses with both the prototype and a white long cane to allow performance comparisons to be drawn between them. The subjects quickly learned how to use the glove and successfully completed all of the trials, though still being slower with it than with the cane. Qualitative interviews revealed a high level of usability and user experience. Overall, the results indicate the general processability of spatial information through sensory substitution using haptic, vibrotactile interfaces. Further research would be required to evaluate the prototype’s capabilities after extensive training and to derive a fully functional navigation aid from its features.
... In the last decade, many research projects have been developed to compensate for the loss of vision, most of them relying on sensory substitution. Sensory substitution is grounded in the idea of replacing an impaired or lost sense with another sense [1]. Paul Bach-y-Rita, a pioneer in this field, worked on restoring visual functions in blind people [2]. ...
... The usual sensory substitution devices (SSDs) aspire to efficiently convey visual data in real-time via touch or hearing. This data may include the shape and/or size of an object, the perceived (ego-centered) distance from it, or the color of the object [1,3]. Typical SSDs consist of the following three components: a sensor, a processing unit that simplifies and converts the sensory information, and a user interface to transmit this information to the user. ...
... All SSDs are based on the sensory substitution motor loop (cf. Figure 1). This loop embodies perception: (1) the sensor (usually a camera) is pointed in a given (ego-centered) direction (towards the target); (2) a cloud service or a local computer interprets the image and converts it into tactile or audio stimulation. ...
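The three-component pipeline described above (sensor, processing unit, user interface) can be sketched as a minimal depth-to-vibration mapping. The grid size, sensing range, and linear intensity law below are illustrative assumptions, not any particular SSD's design.

```python
import numpy as np

def depth_to_tactile(depth, grid=(4, 4), max_range=5.0):
    """Convert a depth image (metres) into vibration intensities in [0, 1]
    for a coarse tactile grid: nearer obstacles -> stronger vibration."""
    h, w = depth.shape
    gh, gw = grid
    # Downsample: keep the nearest obstacle within each grid cell
    cells = depth[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    nearest = cells.min(axis=(1, 3))
    # Map distance to intensity: 0 at max_range and beyond, 1 at contact
    return np.clip(1.0 - nearest / max_range, 0.0, 1.0)

# Fake sensor frame: a 64x64 depth image with a close object at upper left
frame = np.full((64, 64), 4.5)
frame[:16, :16] = 0.5
intensities = depth_to_tactile(frame)
```

The same conversion stage could drive audio instead of vibration; the key design choice is that the user steers the sensor, closing the sensorimotor loop the text describes.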
Article
Full-text available
This paper introduces the design of a novel indoor and outdoor mobility assistance system for visually impaired people. This system is named the MAPS (Mobility Assistance Path Planning and orientation in Space), and it is based on the theoretical frameworks of mobility and spatial cognition. Its originality comes from the assistance of two main functions of navigation: locomotion and wayfinding. Locomotion involves the ability to avoid obstacles, while wayfinding involves orientation in space and ad hoc path planning in an (unknown) environment. The MAPS architecture proposes a new low-cost system for indoor-outdoor cognitive mobility assistance, relying on two cooperating hardware feedbacks: the Force Feedback Tablet (F2T) and the TactiBelt. F2T is an electromechanical tablet using haptic effects that allow the exploration of images and maps. It is used to assist with map learning, the emergence of space awareness, path planning, wayfinding and effective journey completion, helping a VIP construct a mental map of their environment. TactiBelt is a vibrotactile belt providing active support for the path integration strategy while navigating; it helps the VIP localize the nearest obstacles in real time and provides ego-centered directions to reach the destination. The technology used for acquiring information about the surrounding space is based on vision (cameras) combined with localization on a map. The preliminary evaluations of the MAPS focused on the interaction with the environment and on feedback from the users (blindfolded participants) to confirm its effectiveness in a simulated environment (a labyrinth). These lead users easily interpreted the data provided by the system, which they considered relevant for effective independent navigation.
... Initial research conducted by P. Bach-y-Rita [3] established that the back can be used to mediate visual stimuli to the brain. He then showed that trained blind people can navigate and "visually perceive" features of the environment via electric pulses on the tongue that encode camera images [4,5]. The substitution from vision to audition has also been studied, and Sensory Substitution Devices (SSDs) have been designed for that purpose [6]. ...
... To analyze the effect of the mode and the course number on the localization error, two-factor repeated-measures ANOVAs were performed separately on Group 2D3D, which began with 2D, and Group 3D2D, which began with 3D. For both groups, there was no significant effect of the mode (Group 2D3D: ...). [Footnotes: 4. The repeated-measures ANOVA is used when data are collected from the same participants under different conditions or at different times. 5. The several-factors ANOVA is used for analyzing the effect of several independent variables on one outcome variable. 6. Non-integer degrees of freedom for the F statistic are due to the Greenhouse-Geisser correction, applied to adjust for the lack of sphericity of the data variances, which is a necessary assumption for conducting a repeated-measures ANOVA.] Yet, no participant is considered an outlier for session 2 (with an average course time above 400 seconds), and average times decrease across courses and sessions (Fig. 5). ...
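The analysis named in this excerpt's footnotes can be sketched from scratch: a one-way repeated-measures ANOVA with the Greenhouse-Geisser sphericity correction, implemented in plain NumPy. The simulated participant data and effect sizes below are illustrative assumptions, not the study's measurements.

```python
import numpy as np

def rm_anova(X):
    """One-way repeated-measures ANOVA with Greenhouse-Geisser epsilon.
    X: (n_subjects, k_conditions) array of one measure per cell."""
    n, k = X.shape
    gm = X.mean()
    # Partition the total sum of squares into condition, subject and error parts
    ss_cond = n * ((X.mean(axis=0) - gm) ** 2).sum()
    ss_subj = k * ((X.mean(axis=1) - gm) ** 2).sum()
    ss_err = ((X - gm) ** 2).sum() - ss_cond - ss_subj
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df_cond) / (ss_err / df_err)
    # Greenhouse-Geisser epsilon from the double-centred condition covariance;
    # epsilon < 1 shrinks the degrees of freedom when sphericity is violated,
    # which is why non-integer df appear in the reported statistics
    S = np.cov(X, rowvar=False)
    P = np.eye(k) - np.ones((k, k)) / k
    Sdc = P @ S @ P
    eps = np.trace(Sdc) ** 2 / ((k - 1) * (Sdc ** 2).sum())
    return F, df_cond * eps, df_err * eps, eps

rng = np.random.default_rng(0)
n, k = 16, 3  # e.g. 16 participants, 3 repeated courses (assumed numbers)
data = rng.normal(size=(n, k)) + np.array([0.0, 0.4, 0.8])  # course effect
F, df1, df2, eps = rm_anova(data)
print(f"F = {F:.2f}, GG-corrected df = ({df1:.2f}, {df2:.2f}), eps = {eps:.2f}")
```

The p-value would then be read from the F distribution with the corrected (generally non-integer) degrees of freedom, matching the footnote's remark.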
Preprint
Full-text available
Early visual to auditory substitution devices encode 2D monocular images into sounds while more recent devices use distance information from 3D sensors. This study assesses whether the addition of sound-encoded distance in recent systems helps to convey the "where" information. This is important to the design of new sensory substitution devices. We conducted experiments for object localization and navigation tasks with a handheld visual to audio substitution system. It comprises 2D and 3D modes. Both encode in real-time the position of objects in images captured by a camera. The 3D mode encodes in addition the distance between the system and the object. Experiments have been conducted with 16 blindfolded sighted participants. For the localization, participants were quicker to understand the scene with the 3D mode that encodes distances. On the other hand, with the 2D only mode, they were able to compensate for the lack of distance encoding after a small training. For the navigation, participants were as good with the 2D only mode than with the 3D mode encoding distance.
... The most common means to provide non-invasive somatosensory input are vibrotactile (Chatterjee et al., 2007; Cincotti et al., 2007; Antfolk et al., 2010; Leeb et al., 2013), electrotactile (Bach-y-Rita and Kercel, 2003; Cincotti et al., 2012; Franceschi et al., 2016; Mrachacz-Kersting et al., 2017b; Corbet et al., 2018), mechanotactile (Patterson and Katz, 1992; Antfolk et al., 2013), or passive movement (Ramos-Murguialday et al., 2012; Mrachacz-Kersting et al., 2017b; Randazzo et al., 2017). These modalities are employed for different purposes, including force feedback (Patterson and Katz, 1992; Antfolk et al., 2010, 2013), transmission of kinesthetic information for proprioceptive (Ramos-Murguialday et al., 2013; Randazzo et al., 2017) or navigational purposes (Bach-y-Rita and Kercel, 2003), or encoded patterns with discrete (Chatterjee et al., 2007; Cincotti et al., 2007) or continuous properties (Franceschi et al., 2016). ...
Article
Full-text available
Motor imagery is a popular technique employed as a motor rehabilitation tool, or to control assistive devices to substitute lost motor function. In both said areas of application, artificial somatosensory input helps to mirror the sensorimotor loop by providing kinesthetic feedback or guidance in a more intuitive fashion than via visual input. In this work, we study directional and movement-related information in electroencephalographic signals acquired during a visually guided center-out motor imagery task in two conditions, i.e., with and without additional somatosensory input in the form of vibrotactile guidance. Imagined movements to the right and forward could be discriminated in low-frequency electroencephalographic amplitudes with group level peak accuracies of 70% with vibrotactile guidance, and 67% without vibrotactile guidance. The peak accuracies with and without vibrotactile guidance were not significantly different. Furthermore, the motor imagery could be classified against a resting baseline with group level accuracies between 76 and 83%, using either low-frequency amplitude features or μ and β power spectral features. On average, accuracies were higher with vibrotactile guidance, while this difference was only significant in the latter set of features. Our findings suggest that directional information in low-frequency electroencephalographic amplitudes is retained in the presence of vibrotactile guidance. Moreover, they hint at an enhancing effect on motor-related μ and β spectral features when vibrotactile guidance is provided.
... The transmission of human body movements to other devices through wearable smart bracelets [1] has attracted increasing attention in the field of human-machine interface (HMI) applications [2]. ...
... Each wrist movement (serial number in Fig. 2) combines with a finger movement (serial number in Fig. 2) to give a coordinated action (serial number in Fig. 2). With Hand close (19): wrist flexion (1) → (7), wrist extension (2) → (8), wrist radial (3) → (9), wrist ulnar (4) → (10), wrist pronation (5) → (11), wrist supination (6) → (12). With Hand open (20): wrist flexion (1) → (13), wrist extension (2) → (14), wrist radial (3) → (15), wrist ulnar (4) → (16), wrist pronation (5) → (17), wrist supination (6) → (18). Classification ...
Preprint
Full-text available
Background: The transmission of human body movements to other devices through wearable smart bracelets has attracted increasing attention in the field of human-machine interface (HMI) applications. However, because the collection range of a wearable bracelet is limited, it is necessary to study the relationship between the superposition of wrist and finger motions and their cooperative motion in order to simplify the device's acquisition system. Methods: The multi-channel high-density surface electromyogram (HD-sEMG) signal has high spatial resolution and can improve the accuracy of multi-channel fitting. In this study, we quantified the 256-channel HD-sEMG forearm spatial activation features of hand movements and performed a linear fitting of the quantified features of finger and wrist movements to verify the linear superposition relationship between finger-wrist cooperative movements and their independent movements. Most importantly, we classified and predicted the fitted and the actually measured finger-wrist cooperative actions with four commonly used classifiers: Linear Discriminant Analysis (LDA), K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Random Forest (RF), and evaluated the performance of the four classifiers for gesture fitting in detail according to the classification results. Results: For a total of 12 synthetic gesture actions, classification was performed with the LDA, SVM, RF, and KNN classifiers when the number of fitting channels was 8, 32, or 64. When the number of fitting channels was 8, the prediction accuracy of the LDA classifier was 99.70%, that of KNN was 99.40%, that of SVM was 99.20%, and that of RF was 93.75%.
When the number of fitting channels was 32, the accuracy of LDA was 98.51%, KNN 97.92%, SVM 96.73%, and RF 86.61%. When the number of fitting channels was 64, the accuracy of LDA was 95.83%, KNN 91.67%, SVM 86.90%, and RF 83.30%. Conclusion: The results show that with 8 fitting channels, the classification accuracies of LDA, KNN, and SVM are essentially the same, but SVM takes much less time; when the amount of data is large, SVM should therefore be preferred as the classifier. As the number of fitting channels increases, the classification accuracy of LDA exceeds that of the other three classifiers, so LDA becomes the more appropriate choice. The accuracy of the RF classifier on this type of problem was consistently far lower than that of the other three classifiers, so RF is not recommended for gesture-superposition work.
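The classifier comparison above can be made concrete with a toy pipeline. The sketch below substitutes synthetic Gaussian feature clusters for the fitted HD-sEMG features and uses a from-scratch k-nearest-neighbor classifier (one of the four compared); the data, dimensions, and resulting accuracy are illustrative only and bear no relation to the paper's results.

```python
import math
import random
from collections import Counter

random.seed(0)

def make_gesture_samples(n_classes=4, n_channels=8, per_class=20):
    """Synthetic stand-in for fitted sEMG feature vectors:
    each gesture class is a Gaussian cloud around a random centroid."""
    data = []
    for label in range(n_classes):
        centroid = [random.uniform(0, 1) for _ in range(n_channels)]
        for _ in range(per_class):
            x = [c + random.gauss(0, 0.05) for c in centroid]
            data.append((x, label))
    random.shuffle(data)
    return data

def knn_predict(train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

samples = make_gesture_samples()
split = int(0.8 * len(samples))
train, test = samples[:split], samples[split:]
correct = sum(knn_predict(train, x) == y for x, y in test)
accuracy = correct / len(test)
print(f"k-NN accuracy on synthetic gestures: {accuracy:.2%}")
```

The same train/test split could be fed to the other three classifiers to reproduce the kind of side-by-side comparison the paper reports.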
... Products for visual-auditory interaction have emerged, significantly improving human life. The human sense of touch is also indispensable but under-appreciated [1]- [3]. In contrast to vision and audition, tactile perception is distributed throughout the body. ...
... Sensory substitution refers to the translation of sensory information that is normally available via one sense to another [11]. Sensory substitution can occur across sensory systems, such as touch-to-sight, or within a sensory system such as touch-to-touch [1]. As early as 1983, Saunders et al. [7] proposed transmitting information through direct electrical stimulation of the skin. ...
Article
Full-text available
With the increased demands of human-machine interaction, haptic feedback is becoming increasingly critical. However, the high cost, large size, and low efficiency of current haptic systems severely hinder further development. As a portable and efficient technology, cutaneous electrotactile stimulation has shown promising potential for addressing these issues. This paper presents a review of, and insights into, cutaneous electrotactile perception and its applications. Research results on perceptual properties and evaluation methods are summarized and discussed to clarify the effects of electrotactile stimulation on humans. Electrotactile applications are presented by category to trace the methods and progress in fields such as prosthesis control, sensory substitution, sensory restoration, and sensorimotor restoration. The state of the art demonstrates the superiority, efficiency, and flexibility of electrotactile feedback. However, the complexity of the relevant factors and the limitations of current evaluation methods make precise electrotactile control challenging. Groundbreaking innovation in electrotactile theory will be needed to overcome challenges such as achieving precise perception control, increasing information capacity, reducing comprehension burden, and lowering implementation costs.
... Sensory substitution refers to the brain's ability to use one sensory modality (e.g., touch) to supply environmental information normally gathered by another sense (e.g., vision). Numerous studies have demonstrated that humans can adapt to changes in sensory inputs, even when they are fed into the wrong channels [4,5,24,62]. But difficult adaptations-such as learning to "see" by interpreting visual information emitted from a grid of electrodes placed on one's tongue [5], or learning to ride a "backwards" bicycle [62]-require months of training to achieve mastery. ...
... Numerous studies have demonstrated that humans can adapt to changes in sensory inputs, even when they are fed into the wrong channels [4,5,24,62]. But difficult adaptations-such as learning to "see" by interpreting visual information emitted from a grid of electrodes placed on one's tongue [5], or learning to ride a "backwards" bicycle [62]-require months of training to achieve mastery. Can we do better, and create artificial systems that can rapidly adapt to sensory substitutions, without the need to be retrained? ...
Preprint
Full-text available
In complex systems, we often observe complex global behavior emerge from a collection of agents interacting with each other in their environment, with each individual agent acting only on locally available information, without knowing the full picture. Such systems have inspired development of artificial intelligence algorithms in areas such as swarm optimization and cellular automata. Motivated by the emergence of collective behavior from complex cellular systems, we build systems that feed each sensory input from the environment into distinct, but identical neural networks, each with no fixed relationship with one another. We show that these sensory networks can be trained to integrate information received locally, and through communication via an attention mechanism, can collectively produce a globally coherent policy. Moreover, the system can still perform its task even if the ordering of its inputs is randomly permuted several times during an episode. These permutation invariant systems also display useful robustness and generalization properties that are broadly applicable. Interactive demo and videos of our results: https://attentionneuron.github.io/
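The permutation-invariant mechanism described above can be sketched compactly: each input is encoded by the same shared network into a (key, value) pair, and a fixed query attends over the resulting set, so the pooled output cannot depend on input order. This is a minimal illustration of the idea with made-up dimensions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# One tiny "sensory neuron" network, shared across all inputs:
# it maps a scalar observation to a (key, value) pair.
W_k = rng.normal(size=(1, 4))
W_v = rng.normal(size=(1, 4))
q = rng.normal(size=4)  # fixed query of the global policy

def encode(obs):
    """Apply the identical shared network to each input independently."""
    x = obs.reshape(-1, 1)               # (n_inputs, 1)
    return np.tanh(x @ W_k), np.tanh(x @ W_v)

def attention_pool(obs):
    """Attend over per-input (key, value) pairs with a fixed query.
    The softmax-weighted sum is invariant to the ordering of inputs."""
    K, V = encode(obs)
    scores = K @ q
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V                         # pooled, order-independent code

obs = rng.normal(size=8)
perm = rng.permutation(8)
out1 = attention_pool(obs)
out2 = attention_pool(obs[perm])
print(np.allclose(out1, out2))  # → True: shuffling inputs changes nothing
```

Because pooling is a weighted sum over a set, downstream layers see the same code however the sensory channels are wired.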
... Lower pixels are represented by lower frequencies, and higher pixels by high frequencies; horizontal location is relayed by stereo tuning if using two earphones or by the timing of the scan from left-to-right (Proulx et al., 2015). Other sensory substitution devices turn an image into something that can be touched, such as the BrainPort that uses electrical stimulation on the tongue (Bach-y- Rita and Kercel, 2003). Which sense is the best to substitute for impaired vision? ...
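The pixel-to-sound mapping sketched in that passage (pixel height mapped to frequency, horizontal position mapped to time within a left-to-right scan) can be illustrated as follows; the frequency range, sweep duration, and sample rate are arbitrary choices for the sketch, not the parameters of any actual device.

```python
import numpy as np

def image_to_sweep(img, duration=1.0, fs=8000, f_lo=200.0, f_hi=2500.0):
    """Sweep the image columns left to right over `duration` seconds;
    each bright pixel contributes a sinusoid whose frequency rises with
    pixel height and whose amplitude follows pixel brightness."""
    n_rows, n_cols = img.shape
    # Row 0 is the top of the image, so map it to the highest frequency.
    freqs = np.geomspace(f_hi, f_lo, n_rows)
    col_len = int(duration * fs / n_cols)
    t = np.arange(col_len) / fs
    audio = []
    for col in range(n_cols):
        tones = img[:, col, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        audio.append(tones.sum(axis=0))
    return np.concatenate(audio)

# A single bright pixel in the top-left corner produces a brief
# high-pitched tone at the very start of the sweep.
img = np.zeros((16, 16))
img[0, 0] = 1.0
signal = image_to_sweep(img)
print(signal.shape)
```

Stereo panning or a click at the start of each scan, as the snippet notes, would then disambiguate horizontal position for the listener.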
Article
Full-text available
Research on the origin of vision and vision loss in naturally “blind” animal species can reveal the tasks that vision fulfills and the brain's role in visual experience. Models that incorporate evolutionary history, natural variation in visual ability, and experimental manipulations can help disentangle visual ability at a superficial level from behaviors linked to vision but not solely reliant upon it, and could assist the translation of ophthalmological research in animal models to human treatments. To unravel the similarities between blind individuals and blind species, we review concepts of 'blindness' and its behavioral correlates across a range of species. We explore the ancestral emergence of vision in vertebrates, and the loss of vision in blind species with reference to an evolution-based classification scheme. We applied phylogenetic comparative methods to a mammalian tree to explore the evolution of visual acuity using ancestral state estimations. Future research into the natural history of vision loss could help elucidate the function of vision and inspire innovations in how to address vision loss in humans.
... 2,3,4 Neuroplasticity can be achieved through multiple forms of stimulation, as in passive repetitive movements, learning new tasks, or sensory deprivation. 5 Blind individuals do not have the advantage of vision to perceive the external world. Hence, they depend on their other special senses to be aware of it. ...
Article
BACKGROUND The objectives of the study were to assess and compare the touch sensation of the dominant and non-dominant hands of blind-since-birth, early-onset blind, and late-onset blind participants using Moberg's test, and to determine whether the time of onset of blindness affected touch sensation. METHODS 50 blind participants from various colleges in Mumbai were assessed. A detailed history of the onset of blindness, motor dominance, etc., was taken. Participants were instructed to pick up the objects suggested by Moberg one at a time, as fast as possible, and place them into a box, using the dominant and non-dominant hands alternately. The Kruskal-Wallis test was used for analysis. RESULTS Average Moberg's test values of the dominant and non-dominant hands of blind-since-birth versus late-onset blind participants differed significantly, as did those of early-onset versus late-onset blind participants. Average values for blind-since-birth versus early-onset blind participants did not differ significantly. Touch sensation was thus better in blind-since-birth and early-onset blind participants than in late-onset blind participants. CONCLUSIONS We conclude that, in the absence of visual stimuli, touch sensation is improved in blind-since-birth and early-onset blind individuals compared with late-onset blind individuals. KEY WORDS Blind, Cross-Modal Synaptic Plasticity, Substitution of Sense, Moberg's Pick-up Test, Critical Period
... Augmented verbal feedback plays a major role in helping participants become aware of how they perform and correct any compensatory movements in order to optimize their movement control (16). Therefore, they showed significant improvement in their LLL-STS symmetry in the optimal condition (Table 2). Bach-y-Rita and Kercel (17) indicated that the human movement system has a great capability to use alternative sources of input to successfully carry out a required task. Repetitive practice using verbal commands may therefore promote the functional ability of these individuals. ...
Article
Full-text available
Existing evidence on lower limb loading symmetry and movement stability of patients with stroke commonly involves data during standing and stepping, without clear evidence for sit-to-stand (STS) ability. This study investigated the lower limb loading during sit-to-stand (LLL-STS) in 39 ambulatory individuals with chronic stroke during usual and optimal conditions using digital load cells, as compared to that found in 10 healthy individuals. During the tests, participants were instructed to perform a sit-to-stand movement in 2 conditions: 1) in their usual manner, and 2) in their optimal manner, attempting to put their bodyweight on the lower limbs as symmetrically as they could. The findings indicated that the participants had maximal LLL-STS of 47% and 75% of their bodyweight in the affected and non-affected limb, respectively, resulting in an LLL-STS symmetry of 62%, whereas the LLL-STS symmetry of healthy individuals was nearly 100%. However, the LLL-STS symmetry of stroke participants increased significantly, to 73%, when they attempted to take bodyweight onto both lower extremities equally. The findings suggest that the participants retained some capability that they did not usually access, and support the use of verbal commands as an alternative rehabilitation strategy to promote the LLL-STS symmetry of individuals with chronic stroke.
... Sensory substitution encodes the missing sensory information and routes it to the nervous system via alternative, intact sensory channels. For example, auditory and haptic feedback have been used as surrogates for visual feedback, allowing blind people to explore their surroundings [16]. For people with upper-limb amputations, sensory substitution has been shown to provide effective sensory feedback for controlling robotic arms [17]. ...
Article
Full-text available
Background For people with lower-limb amputations, wearing a prosthetic limb helps restore their motor abilities for daily activities. However, the prosthesis's potential benefits are hindered by limited somatosensory feedback from the affected limb and its prosthesis. Previous studies have examined various sensory substitution systems to alleviate this problem; the prominent approach is to convert foot–ground interaction to tactile stimulation. However, positive outcomes for improving postural stability are still rare. We hypothesized that a sensory substitution system based on surrogate tactile stimuli is capable of improving standing stability among people with lower-limb amputations. Methods We designed a wearable device consisting of four pressure sensors and two vibrators and tested it among people with unilateral transtibial amputations (n = 7) and non-disabled participants (n = 8). The real-time measurements of foot pressure were fused into a single representation of foot–ground interaction force, which was encoded by varying the vibration intensity of the two vibrators attached to the participants' forearm. The vibration intensity followed a logarithmic function of the force representation, in keeping with principles of tactile psychophysics. The participants were tested with a classical postural stability task in which visual disturbances perturbed their quiet standing. Results After a brief familiarization with the system, the participants exhibited better postural stability against visual disturbances with sensory substitution switched on than without. Body sway was substantially reduced, as shown in head movements and excursions of the center of pressure. The improvement was present for both groups of participants and was particularly pronounced in the more challenging conditions with larger visual disturbances.
Conclusions Substituting otherwise missing foot pressure feedback with vibrotactile signals can improve postural stability for people with lower-limb amputations. The design of the mapping between the foot–ground interaction force and the tactile signals is essential if users are to exploit the surrogate tactile signals for postural control, especially in situations where their postural control is challenged.
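The logarithmic force-to-vibration mapping described in the Methods can be sketched as below. The force range and intensity bounds are made-up placeholders, not the paper's calibrated values.

```python
import math

def force_to_vibration(force, f_min=0.05, f_max=1.0,
                       a_min=0.2, a_max=1.0):
    """Map a fused foot-ground force reading (here a normalized
    bodyweight fraction) to a vibration intensity on a logarithmic
    scale, in the spirit of Weber-Fechner psychophysics. All
    constants are illustrative."""
    force = min(max(force, f_min), f_max)   # clamp to the usable range
    log_span = math.log(f_max / f_min)
    x = math.log(force / f_min) / log_span  # 0..1 on a log scale
    return a_min + x * (a_max - a_min)

for f in (0.05, 0.2, 0.5, 1.0):
    print(f"force {f:4.2f} -> intensity {force_to_vibration(f):.2f}")
```

A logarithmic law compresses large forces and expands small ones, so equal ratios of force produce equal steps of perceived vibration, which is the rationale the paper cites from tactile psychophysics.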
... In any given machine, the designer must have in mind how the user will interact with it in a manner that ensures cost-effectiveness and operational efficiency [9]. To achieve optimal interface usability, satisfaction, and user interaction, system engineers must adhere to the principles of human-machine interaction and seek to enhance the communication between the two by producing better machine-interface designs based on the shortcomings identified in current systems [10]. As such, any human-machine interface should be oriented towards enhancing machine operational efficiency in the short term and boosting organizational performance in the long term. ...
Article
Full-text available
The twenty-first century has seen a vast technological revolution characterized by the development of cyber-physical systems, the integration of things, and new and computationally improved machines and systems. However, there have been seemingly few strides in the development of user interfaces, specifically for industrial machines and equipment. The aim of this study was to assess the efficiency of human-machine interfaces, in the Kenyan context, in providing a consistent and reliable working environment for industrial machine operators. The researcher employed convenience-based purposive sampling to select 15 participants with at least two years of hands-on experience in machine operation, control, or instrumentation. The results of the study are herein presented, together with recommendations to enhance workforce productivity and efficiency.
... Throughout human history, people have devised a range of epistemic tools to grant this ability: from thermoscopes and seismoscopes, which convert temperature and ground motion into visual form, to Chladni plates [33], which make the modes of vibration of rigid surfaces visible. More modern sensory substitution systems [5] translate stimuli in the environment from one sensory modality to another; in fact, virtually all information visualization systems translate data into a visual form [69]. Some systems (Fig. 6 [45,100]) even overlay visualizations next to objects in physical environments [102]. ...
Preprint
Full-text available
We explore how the lens of fictional superpowers can help characterize how visualizations empower people and provide inspiration for new visualization systems. Researchers and practitioners often tout visualizations' ability to "make the invisible visible" and to "enhance cognitive abilities." Meanwhile superhero comics and other modern fiction often depict characters with similarly fantastic abilities that allow them to see and interpret the world in ways that transcend traditional human perception. We investigate the intersection of these domains, and show how the language of superpowers can be used to characterize existing visualization systems and suggest opportunities for new and empowering ones. We introduce two frameworks: The first characterizes seven underlying mechanisms that form the basis for a variety of visual superpowers portrayed in fiction. The second identifies seven ways in which visualization tools and interfaces can instill a sense of empowerment in the people who use them. Building on these observations, we illustrate a diverse set of "visualization superpowers" and highlight opportunities for the visualization community to create new systems and interactions that empower new experiences with data.
... A detailed block diagram for the Catheter Motion System is given in Figure 5.1. In this section, task control with the human-machine interface is described [82,83]. Then, control via the Robot Operating System (ROS) and the block diagram of each system are presented. ...
Thesis
Full-text available
Due to the risks of traditional surgical methods, robotic technology has become more popular in medical applications. Making large incisions in the patient's body may cause permanent deformation during surgical operations. Surgical operations can become minimally invasive procedures by sending catheter systems through the channels in the body when possible. One of the important key points in medical engineering is therefore the control of the catheter system. Traditional catheter systems are controlled by surgical specialists using strings placed inside the catheter. This control strategy not only results in low positioning accuracy but also requires expertise. Moreover, catheter sizes have become larger due to peripheral components. Therefore, automated control is a desired approach for providing high accuracy and versatility in catheter-based surgical operations. This thesis concerns the motion control, in 2D space, of a catheter system used in the medical field. Firstly, two actuator systems were designed to advance the catheter and to guide the catheter tip in body vessels or channels. The advancement of the catheter was controlled by a friction drive system in a closed-loop manner. A novel sliding mode control was proposed to control the drive system. A millimeter-size permanent magnet was placed at the tip of the catheter tube to control the orientation of the catheter. An untethered permanent magnet actuator is used to manipulate the position of the millimeter-size permanent magnet directly, and the orientation of the catheter tube indirectly. The feedback for motion control was provided via a machine vision setup. These three systems were operated synchronously using the Robot Operating System (ROS). The performance of the motion control system was verified with a test rig that is a planar projection of 3D bronchial model data.
The results showed that the proposed control methodology and the designed system can control the catheter motion.
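The thesis proposes its own novel sliding-mode controller for the friction drive; as a hedged illustration of that control family only, the sketch below applies a classical boundary-layer sliding-mode law to a unit-mass drive model. Gains, time step, and target are arbitrary, not the thesis's design.

```python
import math

# Classical sliding-mode position control of a 1-DOF drive.
dt, lam, k = 0.001, 20.0, 5.0   # step size and controller gains (arbitrary)

def smc_step(x, v, x_ref):
    """Sliding surface s = v + lam * (x - x_ref); the control drives
    s to zero, after which the error decays along the surface.
    tanh() smooths the sign() switch to limit chattering."""
    s = v + lam * (x - x_ref)
    return -k * math.tanh(s / 0.1)

x, v, x_ref = 0.0, 0.0, 0.01    # advance the catheter 10 mm
for _ in range(5000):           # 5 s of simulated motion
    u = smc_step(x, v, x_ref)
    v += u * dt                 # unit-mass double-integrator plant
    x += v * dt
print(f"final position: {x * 1000:.2f} mm")
```

The appeal of sliding-mode control here is its robustness: once the state reaches the surface, convergence depends on lam rather than on the (poorly known) friction-drive dynamics.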
... IMUs are particularly well suited for sensory-augmented balance and coordination training, since they are widely integrated into wearable or portable wireless devices, such as smartwatches and phones. Regardless of the specific feedback modality (vibrotactile feedback [42], surface-electrode stimulation of the vestibular nerve [43], electric currents applied to the tongue [44][45][46], auditory [47,48], visual [49], or multimodal feedback [50]), participants with sensory disabilities (e.g., vestibular disabilities [42,51], peripheral neuropathy [52]) and motor disabilities (e.g., Parkinson's disease [53][54][55]) have used SA cues to make postural and gait-related corrections. ...
Article
Full-text available
Intensive balance and coordination training is the mainstay of treatment for symptoms of impaired balance and mobility in individuals with hereditary cerebellar ataxia. In this study, we compared the effects of home-based balance and coordination training with and without vibrotactile SA for individuals with hereditary cerebellar ataxia. Ten participants (five males, five females; 47 ± 12 years) with inherited forms of cerebellar ataxia were recruited to participate in a 12-week crossover study during which they completed two six-week blocks of balance and coordination training with and without vibrotactile SA. Participants were instructed to perform balance and coordination exercises five times per week using smartphone balance trainers that provided written, graphic, and video guidance and measured trunk sway. The pre-, per-, and post-training performance were assessed using the Scale for the Assessment and Rating of Ataxia (SARA), SARAposture&gait sub-scores, Dynamic Gait Index, modified Clinical Test of Sensory Interaction in Balance, Timed Up and Go performed with and without a cup of water, and multiple kinematic measures of postural sway measured with a single inertial measurement unit placed on the participants’ trunks. To explore the effects of training with and without vibrotactile SA, we compared the changes in performance achieved after participants completed each six-week block of training. Among the seven participants who completed both blocks of training, the change in the SARA scores and SARAposture&gait sub-scores following training with vibrotactile SA was not significantly different from the change achieved following training without SA (p>0.05). 
However, a trend toward improved SARA scores and SARAposture&gait sub-scores was observed following training with vibrotactile SA; compared to their pre-vibrotactile SA training scores, participants significantly improved their SARA scores (mean=−1.21, p=0.02) and SARAposture&gait sub-scores (mean=−1.00, p=0.01). In contrast, no significant changes in SARAposture&gait sub-scores were observed following the six weeks of training without SA compared to the pre-training scores immediately preceding the training block without vibrotactile SA (p>0.05). No significant changes in trunk kinematic sway parameters were observed as a result of training (p>0.05). Based on the findings from this preliminary study, balance and coordination training improved the participants' motor performance, as captured through the SARA. Vibrotactile SA may be a beneficial addition to training regimens for individuals with hereditary cerebellar ataxia, but additional research with larger sample sizes is needed to assess the significance and generalizability of these findings.
... Sensory substitution provides another argument for a single model (Bach-y-Rita and Kercel, 2003). For example, it is possible to build tactile visual aids for blind people as follows. ...
Preprint
Full-text available
Inspired by progress in large-scale language modeling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report we describe the model and the data, and document the current capabilities of Gato.
... This draws on the concept of sensory substitution, originally developed by Geldard (1960) and Bach-y-Rita et al. (1969), which is presented more fully hereafter. 6.2 The concept of sensory substitution. Sensory substitution is defined as a way of transmitting one or more pieces of information through a sensory modality other than the one originally dedicated to that information (Bach-y-Rita and Kercel, 2003). The best-known example is Braille. ...
Thesis
Full-text available
The loss of autonomy caused by upper-limb amputation affects, in France, a young and active population. Its physical and psychological repercussions make it a clinical, technical, and scientific problem at once. Because of its low prevalence, upper-limb amputation is considered an orphan pathology. Despite technological progress and the many functions offered by the latest generation of prostheses, the devices offered to patients remain very limited in their control. Controlling these tools remains complex and non-intuitive, which results in a high abandonment rate. Work on myoelectric prostheses has highlighted that, to be fully functional and used by patients, a prosthesis should be able to (i) generate reflex responses and (ii) restore lost sensory function. In this thesis, we explored these two aspects: reflex behaviors and sensory substitution. The first part studies the regulation of the motor command by low-level sensorimotor loops. We tested a simplified network connected to a musculoskeletal arm model with the aim of producing movements of given amplitude and duration. The network's ability to produce these behaviors was evaluated with three optimization algorithms. This study allowed us to explore the space of possible behaviors of the neuro-mechanical system. Although highly simplified, the system was able to produce biologically plausible movements in the presence of gravity. This simplified network shows a great richness of behavioral expression, in which the same movement can be produced by several combinations of parameters.
This type of network is a potential candidate to bridge basic descending commands, such as recordings of muscle activity (EMG), and the movements produced by the prosthesis motors. Moreover, this structure has the potential to produce reflex responses. Regarding the study of sensory substitution, we developed a device producing vibrotactile stimulation that gives subjects information about the angular position of their elbow. We used it in several experiments and demonstrated good spatial discrimination abilities in amputee patients and healthy subjects. We then used it for online control of a virtual arm, in which the vibrations provided spatial cues in a target-reaching task. This experiment revealed that proprioceptive feedback improved performance compared with a no-feedback condition. On the other hand, although adding proprioceptive feedback to vision did not improve performance, it did not degrade it either. Moreover, control in the presence of both kinds of feedback was the one subjects liked best. This work allowed us to enrich knowledge about the control of myoelectric prostheses, with the goal of approaching the most natural control possible.
... The images obtained from the external environment with cameras placed on the foreheads of visually impaired people were transmitted in real time to the electrodes on their backs. Similarly, in the product named BrainPort [10], patented by Paul Bach-y-Rita, the image is created on the tongue surface with electrical signals. ...
Conference Paper
In this work, an innovative framework for an electro-tactile display system for blind and visually impaired persons is proposed. In the proposed system, the creation of visual information through another sense is modelled using a neuroplasticity approach. The framework aims to simulate tactile stimuli in the hands and thereby create vision on the skin of the hands. The depth map of the surroundings is processed in real time as the visual percept; a Microsoft Kinect camera with infrared sensors is utilized for this purpose. A theoretical connection between the electro-tactile display and the depth map has been designed for the hardware and software applications to be created in future studies. The proposed electro-tactile system and depth map have been simulated briefly.
... The good performance maintained with the addition of the vibrotactile feedback and the preference for the multimodal condition could be explained by the congruency between the feedback signal and the information it delivers. This congruency has been reported as a key element for the use and integration of a sensory-substitution system [56]. In fact, it has been shown that when the feedback signal is not congruent or is in conflict with vision, it is not integrated in the motor control strategy [27,57]. ...
Article
Full-text available
Background Current myoelectric prostheses lack proprioceptive information and rely on vision for their control. Sensory substitution is increasingly developed with non-invasive vibrotactile or electrotactile feedback, but most systems are designed for grasping or object discrimination, and few have been tested for online control in amputees. The objective of this work was to evaluate the effect of a novel vibrotactile feedback on the accuracy of myoelectric control of a virtual elbow by healthy subjects and participants with an upper-limb amputation at humeral level. Methods Sixteen healthy participants and 7 transhumeral amputees performed myoelectric control of a virtual arm under different feedback conditions: vision alone (VIS), vibration alone (VIB), vision plus vibration (VIS+VIB), or no feedback at all (NO). Reach accuracy was evaluated by angular errors during discrete as well as back-and-forth movements. Healthy participants' workloads were assessed with the NASA-TLX questionnaire, and feedback conditions were ranked according to preference at the end of the experiment. Results Reach errors were higher in NO than in VIB, indicating that our vibrotactile feedback improved performance as compared to no feedback. Conditions VIS and VIS+VIB displayed similar levels of performance and produced lower errors than VIB. Vision therefore remains critical to maintain good performance, which was neither improved nor deteriorated by the addition of vibrotactile feedback. The workload associated with VIB was higher than for VIS and VIS+VIB, which did not differ from each other. 62.5% of healthy subjects preferred the VIS+VIB condition, and ranked VIS and VIB second and third, respectively. Conclusion Our novel vibrotactile feedback improved myoelectric control of a virtual elbow as compared to no feedback. Although vision remained critical, the addition of vibrotactile feedback neither improved nor deteriorated the control, and it was preferred by participants.
Longer training should improve performance with VIB alone and reduce the need for vision in closed-loop prosthesis control.
... These techniques involve passive stretching, and contraction and relaxation of specific muscle groups, in order to improve their flexibility and to stimulate sensory function, muscle tone, and the recovery of movement patterns. Some key elements for motor and sensory functional recovery (Jang, 2013) are repetition of movement patterns (Zbogar et al., 2017), somatosensory stimulation (Hara, 2008), and the application of stimuli outside the motor and sensory pathways (visual, auditory, or proprioceptive) (Bach-y-Rita and Kercel, 2003; Bento et al., 2012; Takeuchi and Izumi, 2012; Galińska, 2015). These neurorehabilitation strategies make it possible to re-educate neural tissue that is not completely damaged or to reactivate other areas to form new synaptic connections (Gordon, 2005). ...
Article
Full-text available
Brain-Computer Interface (BCI) is a technology that uses electroencephalographic (EEG) signals to control external devices, such as Functional Electrical Stimulation (FES). Visual BCI paradigms based on P300 and Steady State Visually Evoked Potentials (SSVEP) have shown high potential for clinical purposes. Numerous studies have been published on P300- and SSVEP-based non-invasive BCIs, but many of them present two shortcomings: (1) they are not aimed at motor rehabilitation applications, and (2) they do not report in detail the artificial intelligence (AI) methods used for classification, or their performance metrics. To address this gap, in this paper the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology was applied to prepare a systematic literature review (SLR). Papers older than 10 years, repeated, or not related to a motor rehabilitation application were excluded. Of all the studies, 51.02% referred to theoretical analysis of classification algorithms. Of the remaining, 28.48% were for spelling, 12.73% for diverse applications (control of wheelchairs or home appliances), and only 7.77% were focused on motor rehabilitation. After the inclusion and exclusion criteria were applied and quality screening was performed, 34 articles were selected. Of them, 26.47% used the P300 and 55.8% the SSVEP signal. Five application categories were established: Rehabilitation Systems (17.64%), Virtual Reality environments (23.52%), FES (17.64%), Orthosis (29.41%), and Prosthesis (11.76%). Of all the works, only four performed tests with patients. The most reported machine learning (ML) algorithms used for classification were linear discriminant analysis (LDA) (48.64%) and support vector machine (16.21%), while only one study used a deep learning algorithm: a Convolutional Neural Network (CNN). The reported accuracy ranged from 38.02 to 100%, and the Information Transfer Rate from 1.55 to 49.25 bits per minute.
While LDA is still the most widely used AI algorithm, CNNs have shown promising results, but due to their high technical implementation requirements, many researchers do not consider their implementation worthwhile. To achieve fast and accurate online BCIs for motor rehabilitation applications, future work on SSVEP-based, P300-based and hybrid BCIs should focus on optimizing the visual stimulation module and the training stage of ML and DL algorithms.
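The Information Transfer Rate figures reported above (1.55 to 49.25 bits per minute) follow from the standard Wolpaw formula, which combines the number of classes, the classification accuracy, and the selection speed. A minimal sketch (the function name is ours):

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Wolpaw Information Transfer Rate in bits per minute."""
    n, p = n_classes, accuracy
    if n < 2 or not 0.0 <= p <= 1.0:
        raise ValueError("need n_classes >= 2 and 0 <= accuracy <= 1")
    # p*log2(p) and (1-p)*log2(...) vanish at the p=1 and p=0 limits
    term_hit = p * math.log2(p) if p > 0.0 else 0.0
    term_miss = (1.0 - p) * math.log2((1.0 - p) / (n - 1)) if p < 1.0 else 0.0
    bits_per_selection = math.log2(n) + term_hit + term_miss
    return bits_per_selection * selections_per_min
```

For example, a 4-class speller at 100% accuracy making 10 selections per minute transfers 20 bits/min, well within the range reported above.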
... These devices are also denoted as cognitive prostheses (Liu et al., 2018). They can make use of either the tactile sense (with devices as simple as a mobility cane (Bach-y-Rita and Kercel, 2003), or vibro- or electro-tactile stimulation of the skin or tongue (Bach-y-Rita et al., 1969;Bach-y-Rita et al., 1998;Bach-y-Rita and Collins, 1971;Deroy and Auvray, 2012)) or the auditory sense (Capelle et al., 1998;Hanneton et al., 2010;Meijer, 1992;Liu et al., 2018). Liu et al. (2018), for example, developed an augmented reality device that identifies important aspects of the visual scene, such as obstacles or objects, and describes them verbally to the user. ...
Thesis
Retinal degenerative diseases, such as retinitis pigmentosa or age-related macular degeneration, affect between 20 and 25 million people worldwide. These diseases lead to the gradual loss of photoreceptors, the light-sensitive cells of the retina, and therefore to blindness. Retinal prostheses are a promising strategy to restore sight to these patients. These devices are made of grids of electrodes or microphotodiodes positioned on or under the retina, or on the choroid, the vascular layer of the eye, to stimulate the remaining neurons of the retina with electrical impulses. The visual scene is filmed by a camera carried by the patient and converted into an electrical stimulation pattern to compensate for the loss of photoreceptors. Despite promising beginnings and considerable technical progress, with the latest generations of implants made up of several thousand independent stimulation units, the visual performance of equipped patients remains well below expectations. Patients who no longer perceived light are now able to locate objects, perform visual recognition tasks or simple spatial navigation; however, the functional benefits remain very limited. Several factors explain this limited performance. First of all, the perception of shapes is greatly degraded by the diffusion of current in the tissue and the activation of the distal parts of axons: a given electrode does not produce a 'pixel' in the visual field, but an elongated and ill-defined shape. In addition, the electrical stimulation of different types of retinal cells, which normally encode different information about the visual stimulus, is nonspecific, so downstream visual centers receive corrupted information.
Extensive efforts have been made to obtain more focused and specific stimulation, to process the incoming image so as to transmit only the information necessary for visual performance, and to mimic the neural code using an appropriate encoder. In this thesis, we propose a new strategy for optimizing visual signal conversion in retinal prostheses based on the measurement of visual performance and patients' preferences. Users participate in a series of visual tasks, and their responses are used to continuously adjust the encoder according to a Bayesian optimization algorithm. Bayesian optimization is a powerful method for optimizing functions whose analytical form is unknown, without access to derivative information. It is especially used when the cost of a single function evaluation is high. It relies on a surrogate Bayesian model of the objective function, which is used to query the system at locations informative about the optimum. The choice of querying a particular point is driven by a heuristic aiming to balance exploration and exploitation. In this thesis, we validate this strategy in participants with normal or corrected vision, using a prosthetic vision simulator. We show that preference-based optimization improves the quality of participants' perception, that this subjective improvement transfers to stimuli other than those used during optimization, and that it is accompanied by better visual acuity. The use of an adaptive sampling scheme allows faster optimization compared to random sampling. We used a parameterization of the encoder based on a model predicting the perception of patients equipped with an implant. We show that the optimization procedure is robust to errors in this model. This robustness, together with the fact that the method makes no particular assumption about the type of implant, suggests that it could be implemented to improve sight restoration in patients.
In addition, we show that an optimization strategy based on personal preference is more effective than optimization based on performance. The challenges of applying preferential Bayesian optimization to retinal prostheses led us to develop new Bayesian optimization algorithms that outperform state-of-the-art methods in scenarios where the objective evaluation returns binary data, such as preference comparisons. In particular, many of the previously proposed methods were either too computationally expensive to be used in a psychophysics context, or showed limited performance in practice. The new methods we propose are based on the analytical decomposition of uncertainty about an evaluation outcome into its two components, aleatoric and epistemic, which allowed us to refine the definition of exploration in the context of Bayesian optimization. The optimization of retinal prosthesis encoders is an example of a situation where the optimized system can operate in many different environments, which induces several challenges for efficient and robust performance improvement. We explore this type of problem, in the case where the evaluation of the system involves binary measurements, by generalizing binary Bayesian optimization. We propose new heuristics combining methods from Bayesian optimization and active learning to efficiently optimize the objective across contexts.
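The exploration–exploitation loop described above can be illustrated with a minimal Gaussian-process Bayesian optimizer using an upper-confidence-bound acquisition rule. This is a generic sketch, not the thesis's preferential or binary variant; all names and constants are illustrative:

```python
import numpy as np

def rbf_kernel(a, b, length=0.3):
    """Squared-exponential covariance (unit variance) between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_grid, noise=1e-6):
    """Gaussian-process posterior mean and std on a grid of candidate points."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_grid)
    mu = Ks.T @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, Ks)
    # prior variance is 1.0 for the unit-variance RBF kernel
    var = np.clip(1.0 - np.sum(Ks * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def ucb_bayes_opt(objective, n_iters=15, kappa=2.0, seed=0):
    """Maximize a black-box function on [0, 1] with an upper-confidence-bound rule."""
    rng = np.random.default_rng(seed)
    x_grid = np.linspace(0.0, 1.0, 201)
    xs = list(rng.uniform(0.0, 1.0, size=2))   # two random initial queries
    ys = [objective(x) for x in xs]
    for _ in range(n_iters):
        mu, sd = gp_posterior(np.array(xs), np.array(ys), x_grid)
        # kappa trades off exploitation (high mu) against exploration (high sd)
        x_next = float(x_grid[np.argmax(mu + kappa * sd)])
        xs.append(x_next)
        ys.append(objective(x_next))
    best = int(np.argmax(ys))
    return xs[best], ys[best]
```

Each query updates the surrogate, and the acquisition rule automatically shifts from sampling uncertain regions to refining the current best, which is the behaviour the thesis exploits when patient responses are costly.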
Article
Eyes-free operation of mobile devices is critical in situations where the visual channel is either unavailable or attention is needed elsewhere. In such situations, vibrotactile tracing along paths or lines can help users to navigate and identify symbols and shapes without visual information. In this paper, we investigated the applicability of different metrics that can measure the effectiveness of vibrotactile line tracing methods on touch screens. In two user studies, we compare trace Length Error, Area Error, and Fréchet Distance as alternatives to commonly used trace Time. Our results show that a lower Fréchet distance is correlated better with the comprehension of a line trace. Furthermore, we show that distinct feedback methods perform differently with varying geometric features in lines and propose a segmented line design for tactile line tracing studies. We believe the results will inform future designs of eyes-free operation techniques and studies.
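The Fréchet distance used above as a comprehension metric can be computed, for discretized traces, with the classic "dog leash" dynamic-programming recurrence over the two point sequences:

```python
import math
from functools import lru_cache

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two polylines given as point sequences."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = math.dist(p[i], q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        # best of advancing either walker or both, never shortening the leash
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(p) - 1, len(q) - 1)
```

For very long traces an iterative table fill is preferable to this recursive form, which is bounded by Python's recursion limit.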
Article
In Holoplexity, consciousness is hypothesized as predating the universe and as ultimately comprising all matter and energy. It constitutes the very architecture of reality, including spatial dimensions (as we perceive them), and is even the causal factor of time itself. The theory then argues that consciousness is hidden from us, is timeless (yet still generates time), and is the source from which all things flow. Humans are only able to appreciate and apprehend the aftermath of this interaction. Consciousness is then believed to exist in all things, as manifested in both matter and electromagnetism, as well as in non-spatial, non-temporal, phenomenal existence itself. Holoplexity seeks to offer an explanation for how information becomes human experience. From the advent of time to the reading of these words, the Holoplexity Theory of Consciousness offers a coherent explanation for it all.
Chapter
Artificial sensory substitution plays a crucial role in several domains, including prosthetics, rehabilitation and assistive technologies. The sense of touch has historically been the ideal candidate to convey information about the external environment, both contact-related and visual, when the natural action-perception loop is broken or unavailable. This is particularly true for the assistance of blind people, for whom touch elicitation has been used to make content perceivable (e.g. Braille text or graphical reproduction), or to deliver informative cues for navigation. However, despite significant technological advancements in both devices for touch-mediated access to alphanumeric stimuli and technology-enabled haptic navigation supports, the majority of proposed solutions have met with scarce acceptance in the end-user community. The main reason for this, in our opinion, is the limited involvement of blind people in the design process. In this work, we report on a user-centric approach that we successfully applied to haptics-enabled systems for blind people assistance, whose engineering and validation have received significant input from visually impaired people. We also present an application of our approach to the design of a single-cell refreshable Braille device and to the development of a wearable haptic system for indoor navigation. After a summary of our previous results, we critically discuss next avenues and propose novel solutions for touch-mediated delivery of navigation information, whose implementation has been driven entirely by feedback collected from real end users.
Article
We explore how the lens of fictional superpowers can help characterize how visualizations empower people and provide inspiration for new visualization systems. Researchers and practitioners often tout visualizations' ability to “make the invisible visible” and to “enhance cognitive abilities.” Meanwhile superhero comics and other modern fiction often depict characters with similarly fantastic abilities that allow them to see and interpret the world in ways that transcend traditional human perception. We investigate the intersection of these domains, and show how the language of superpowers can be used to characterize existing visualization systems and suggest opportunities for new and empowering ones. We introduce two frameworks: The first characterizes seven underlying mechanisms that form the basis for a variety of visual superpowers portrayed in fiction. The second identifies seven ways in which visualization tools and interfaces can instill a sense of empowerment in the people who use them. Building on these observations, we illustrate a diverse set of “visualization superpowers” and highlight opportunities for the visualization community to create new systems and interactions that empower new experiences with data. Material and illustrations are available under CC-BY 4.0 at osf.io/8yhfz.
Article
Bilateral vestibulopathy is characterized by bilateral functional impairment of the peripheral vestibular system. The usual symptoms are persistent unsteadiness and oscillopsia during head and body movements. It has been reported that sensory substitution therapy, that is, vestibular rehabilitation using a sensory substitution device that transmits other sensory information to a stimulator as a substitute for defective vestibular information, might be effective in patients with bilateral and unilateral vestibulopathy. Recently, we developed a new wearable device, TPAD (tilt perception adjustment device), that transmits vibratory input containing head-tilt information to the mandible as a substitute for defective vestibular information. We assessed patients with unilateral vestibulopathy using the dizziness handicap inventory (DHI), gait analysis, and the visual/somatosensory dependence of postural control. Three months after therapy in patients with unilateral vestibulopathy, the DHI and walking speed improved even when the subjects were not wearing the TPAD. Moreover, the index of visual dependence of posture control, evaluated by posturography with/without foam rubber in the eyes-open or eyes-closed condition, decreased. These findings suggested that sensory vibratory substitution with a TPAD for defective vestibular information induced brain plasticity related to sensory re-weighting, reducing the visual dependence of posture control and thereby improving dizziness and imbalance even while the TPAD was not worn. We then investigated the effects of sensory substitution therapy using a TPAD in patients with bilateral vestibulopathy and in normal subjects. Three months after sensory substitution therapy in patients with bilateral vestibulopathy, the DHI and the sway area with eyes closed measured by posturography improved even when the subjects did not wear a TPAD.
However, the gait parameters improved only while wearing a TPAD. These findings suggest that sensory vibratory substitution with a TPAD might serve as a temporary replacement for defective vestibular information in patients with bilateral vestibulopathy. Moreover, wearing the TPAD improved posture control under the eyes-closed condition with foam rubber measured by posturography in normal subjects. The TPAD might thus be applicable as a wearable device for improving posture control, not only in patients with bilateral vestibulopathy, but also in those with presbyvestibulopathy.
Article
Full-text available
As robots become more ubiquitous, they will increasingly need to behave as our team partners and smoothly adapt to the (adaptive) human team behaviors to establish successful patterns of collaboration over time. A substantial number of adaptations present themselves through subtle and unconscious interactions, which are difficult to observe. Our research aims to bring about awareness of co-adaptation that enables team learning. This paper presents an experimental paradigm that uses a physical human-robot collaborative task environment to explore emergent human-robot co-adaptations and derive the interaction patterns (i.e., the targeted awareness of co-adaptation). The paradigm provides a tangible human-robot interaction (i.e., a leash) that facilitates the expression of unconscious adaptations, such as "leading" (e.g., pulling the leash) and "following" (e.g., letting go of the leash) in a search-and-navigation task. The task was executed by 18 participants, after which we systematically annotated videos of their behavior. We discovered that their interactions could be described by four types of adaptive interactions: stable situations, sudden adaptations, gradual adaptations and active negotiations. From these types of interactions we have created a language of interaction patterns that can be used to describe tacit co-adaptation in human-robot collaborative contexts. This language can be used to enable communication between collaborating humans and robots in future studies, to let them share what they learned and support them in becoming aware of their implicit adaptations.
Article
Human-machine interface (HMI) techniques use bioelectrical signals to achieve real-time synchronised communication between the human body and machine functioning. HMI technology not only provides real-time control access but can also control multiple functions at a single instant with modest human input and increased efficiency. HMI technologies yield advanced control access in numerous applications such as health monitoring, medical diagnostics, the development of prosthetic and assistive devices, the automotive and aerospace industries, robotic controls and many more fields. In this paper, various physiological signals, their acquisition and processing techniques, and their respective applications in different HMI technologies are discussed.
Article
Researchers in post-war industrial laboratories such as Bell Labs and the Smith-Kettlewell Institute pioneered solutions to compensate for sensory loss through so-called sensory substitution systems, premised on an assumption of cortical and sensory plasticity. The article tracks early discussions of plasticity in psychology literature from William James, acknowledged by Wiener, but explicitly developed by Bach-y-Rita and his collaborators. After discussing the conceptual foundations of the principles of sensory substitution, two examples are discussed. First, ‘Project Felix’ was an experiment in vibrotactile communication by means of ‘hearing gloves’ for the deaf at Norbert Wiener’s laboratory at Massachusetts Institute of Technology, demonstrated to Helen Keller in 1950. Second, the tactile-visual sensory substitution system for the blind pioneered by Paul Bach-y-Rita from 1968 onwards. Cumulatively, this article underlines the crucial yet occluded history of research on sensory impairments in the discovery of underlying neurophysiological processes of plasticity and the emergent discourse of neuroplastic subjectivity.
Conference Paper
This paper presents a novel system for tactile feedback integrating an advanced sensorized glove and a noninvasive stimulation interface. The system comprises a textile glove that integrates 64 sensors capable of capturing distributed tactile signals during typical manual activities, embedded interface electronics, a multichannel stimulator and a non-invasive stimulation interface. Several experiments were performed to test the capability of the system to capture static or dynamic patterns during manual interactions and deliver them to the user. The experiments demonstrated that the system successfully translated mechanical interaction into electrotactile profiles within a delay of 32 ms, opening up interesting perspectives for wearable feedback systems for post-stroke rehabilitation.
Chapter
Book link: https://books.google.ca/books?hl=en&lr=&id=ZfRNEAAAQBAJ&oi=fnd&pg=PA3&ots=KvImUNi961&sig=40HuC2lumwM6Ns0SMtVEja-W38g&redir_esc=y#v=onepage&q&f=false The real world is multisensory, and our experiences in this world are constructed by the stimulation of all our senses, including vision, audition, touch, olfaction, and taste. However, virtual environments, including virtual simulations and serious games, and the human-computer interface more generally, have focused on the visual and auditory senses. Simulating the other senses, such as the sense of smell (olfaction), can be beneficial for supporting learning. In this paper we present a simple and cost-effective olfactory interface constructed using an Arduino Uno microcontroller board, a small fan, and an off-the-shelf air freshener to deliver scents to the user of a serious game. A fuzzy logic system regulated the amount of scent delivered to the user based on their distance to the display. As a proof of concept, we developed a serious game intended to teach basic math (counting) skills to children. Learners (players) collect pineapples from the scene and then enter the number of pineapples collected. As the pineapples are collected, a pineapple scent is emitted from the olfactory interface, serving to supplement or complement the learner's senses and stimulate their affect and cognition. As part of our proof of concept, a 10-year-old learner played the game and provided us with feedback regarding the olfactory interface, illustrating the potential of the system.
Chapter
Our understanding of what exactly needs to be protected against in order to safeguard a plausible construal of our ‘freedom of thought’ is changing. And this is because the recent influx of cognitive offloading and outsourcing—and the fast-evolving technologies that enable this—generate radical new possibilities for freedom-of-thought violating thought manipulation. Taking brain-computer interface technologies (BCIs) and the associated possibility of ‘extended’ beliefs as a reference point, I propose and defend a sufficient condition on freedom-of-thought violating (extended) thought manipulation. On the view proposed, the right not to have one’s thoughts or opinions manipulated is violated if one is (i) caused to acquire non-autonomous propositional attitudes (acquisition manipulation) or (ii) caused to have otherwise autonomous propositional attitudes non-autonomously eradicated (eradication manipulation). The implications of this view are then illustrated through four thought experiments, which map on to four distinct ways one’s freedom of thought is plausibly violated.
Article
Sensory substitution is thought to be a promising non-invasive assistive technology for people with complete loss of sight because it provides inaccessible visual information via a preserved modality. However, Sensory Substitution Devices (SSDs) are still rarely used by visually impaired persons, possibly due to a lack of structured and supervised training that could be offered alongside these devices. Here, we developed and evaluated a training program that supports the usage of a recently developed colour-to-sound SSD – the Colorophone. Following our recently proposed theoretical model of SSD development, we propose that this training should help people with complete loss of sight to learn how to efficiently use the device by developing relationships between the components of the user-environment-technology system. We applied systematic case studies combined with a mixed-method approach to evaluate the efficacy of this SSD training program. Five blind users underwent ca. 22 hours of training, divided into four main parts: identification of the users’ individual characteristics and adaptations; sensorimotor training with the device; semi-structured explorations with the device; and evaluation of the training. We demonstrated that this training allows users to successfully acquire a set of skills (i.e., master the sensorimotor contingencies required by the device, develop visual-like perceptual skills, as well as learn about colours) and progress along developmental trajectories (e.g., switch from serial to parallel information processing, recognize more complex colours, increase environment and task complexity). Importantly, we identified individual differences in learning strategies (i.e., sensorimotor vs. metacognitive strategy) that had an impact on the users’ training progress and required the training assistants (TAs) to apply different assistive strategies. 
Additionally, we described the crucial role of a (non-professional) training assistant in the training progress: this person facilitates the development of relationships between elements of the user-environment-technology system by supporting a metacognitive learning strategy, thereby reducing the risk of abandonment of the SSD. Our study shows the importance for SSD development of well-designed, tailored training, and it provides new insights into the process of SSD-related perceptual learning.
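Colour-to-sound devices like the Colorophone map visual colour information onto auditory parameters. The actual Colorophone encoding is not described here, so the following is a purely hypothetical mapping for illustration; the function name, base frequencies, and loudness rule are all our own assumptions:

```python
def rgb_to_tone(r, g, b):
    """Hypothetical colour-to-sound mapping (NOT the Colorophone's actual scheme).

    The dominant colour channel picks a tone frequency; overall brightness
    sets loudness on a 0-1 scale.
    """
    base_freq = {"r": 220.0, "g": 440.0, "b": 880.0}   # illustrative constants
    dominant = max(("r", r), ("g", g), ("b", b), key=lambda ch: ch[1])[0]
    loudness = (r + g + b) / (3 * 255)
    return base_freq[dominant], loudness
```

Even a toy mapping like this makes the training challenge concrete: the user must learn the sensorimotor contingency between head movement, colour under the camera, and the resulting tone.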
Article
Human–machine interfaces have penetrated various academic and industrial fields such as smartphones, robotics, virtual reality, and wearable electronics, due to their abundant functional sensors and information interaction methods. Nevertheless, the complex structural design of most sensors, their monotonous parameter detection capability, and single-mode information coding hinder their rapid development. As a frontier of self-powered sensors, the triboelectric nanogenerator (TENG) has multiple working modes and high structural adaptability, making it a potential solution for multi-parameter sensing and the miniaturization of traditional interactive electronic devices. Herein, a self-powered hybrid coder (SHC) based on a TENG is reported that encodes two action parameters, touch and press, and can be used as a smart interface for human–machine interaction. The top-down hollow structure of the SHC not only constructs a compositing mode to generate stable touch and press signals but also builds a hybrid coding platform for generating action codes in synergy mode. When a finger touches or presses the SHC, Morse code and Gray code can be transmitted for text information or remote control of electric devices. This self-powered coder is of reference value for designing alternative human–machine interfaces and has the potential to contribute to the next generation of highly integrated portable smart electronics.
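The two codes named above are standard: binary-reflected Gray code (adjacent values differ in exactly one bit, which suits noisy level encoding) and Morse code for text. A brief sketch of both encodings; the mapping of touch/press gestures onto dots and dashes is our assumption, not the paper's published scheme:

```python
def to_gray(n: int) -> int:
    """Binary-reflected Gray code: consecutive integers differ in exactly one bit."""
    return n ^ (n >> 1)

# Minimal Morse table for the demo; a full table would cover A-Z and 0-9.
MORSE = {"S": "...", "O": "---"}

def morse_encode(text: str) -> str:
    """Encode text as Morse, e.g. reading a touch as '.' and a press as '-'."""
    return " ".join(MORSE[ch] for ch in text.upper())
```

For example, press levels 0, 1, 2, 3 become Gray codes 0, 1, 3, 2, so a single misread level corrupts at most one bit.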
Article
What happens when artificial sensors are coupled with the human senses? Using technology to extend the senses is an old human dream, on which sensory substitution and other augmentation technologies have already delivered. Laser tactile canes, corneal implants and magnetic belts can correct or extend what individuals could otherwise perceive. Here we show why accommodating intelligent sensory augmentation devices not just improves but also changes the way of thinking and classifying former sensory augmentation devices. We review the benefits in terms of signal processing and show why non-linear transformation is more than a mere improvement compared to classical linear transformation.
Article
As virtual environments—in the form of videogames and augmented and virtual reality experiences—become more popular, it is important to ensure that they are accessible to all. Previous research has identified echolocation as a useful interaction approach to enable people with visual impairment to access virtual environments. In this paper, we further investigate the usefulness of echolocation to explore virtual environments. We follow a participatory design approach that comprised a focus group session coupled with two fast prototyping and evaluation iterations. During the focus group session, expert echolocators produced a series of seven design recommendations, of which we implemented and trialed four. Our trials revealed that the use of ambient sounds, the ability to place landmarks, directional control, and the ability to use pre-recorded mouth-clicks produced by expert echolocators improved the overall experience of our participants by facilitating the detection of openings and obstacles. The recommendations presented and evaluated in this paper may help to develop virtual environments that support a broader range of users while recognising the value of the lived experience of people with disability as a source of knowledge.
Article
Full-text available
Some philosophers search for the mark of the cognitive: a set of individually necessary and jointly sufficient conditions identifying all and only the instances of cognition. They claim the mark is necessary to answer difficult questions concerning the nature and distribution of cognition. Here, I will argue that, as things stand, given the current landscape of cognitive science, we are not able to identify a mark of the cognitive. I proceed as follows. First, I clarify some factors motivating the search for the mark of the cognitive, thereby highlighting the desiderata the mark is supposed to satisfy. Then, I highlight a tension in the literature over the mark. Given the literature, it is not clear whether the search aims for a mark capturing the intuitive notion of cognition or a genuine scientific kind. I then consider each option in turn, claiming that, either way, no mark satisfying the desiderata can be provided. I then deflect a foreseeable objection and highlight some implications of my view.
Article
Significance The awareness of the individuals’ biological status is critical for creating interactive environments. Accordingly, we devised a multimodal cryptographic bio-human–machine interface (CB-HMI), which seamlessly translates touch-based entries into encrypted biochemical, biophysical, and biometric indices (i.e., circulating biomarkers levels, heart rate, oxygen saturation level, and fingerprint pattern). As its central component, the CB-HMI features thin hydrogel-coated chemical sensors and a signal interpretation framework to access/interpret biochemical indices, bypassing the challenge of circulating analyte accessibility and the confounding effect of pressing force variability. Upgrading the surrounding objects with CB-HMI, we demonstrated new interactive solutions for driving safety and medication use, where the integrated CB-HMI uniquely enabled one-touch bioauthentication (based on the user’s biological state/identity), prior to rendering the intended services.
Chapter
Electric vehicles, one of the leading green technologies today, are leaving a humongous amount of spent lithium-ion batteries untreated. Current research on lithium-ion battery waste management is minimal because the battery's large power range is much more attractive than the battery waste dismantling process. Treating these battery wastes is crucial for rare metal recovery, given the limited resources on land. Thus, this study proposes an eco-design battery pack to ease the recycling process in a more economical and sustainable manner. SolidWorks is used to generate the 3D model and ANSYS is utilized to simulate the product's mechanical performance in drop and impact tests. Results show that the proposed EV battery pack design has a design efficiency of one, with an Easy Fixings indicator of 28%. In the drop test from a height of 0.3 m, it yields a maximum deformation of 1.015e−3 m and a Von Mises stress of 4.827e8 N/m2. In the frontal impact test, a Von Mises stress of 2.5227e6 N/m2 is obtained, and with an impact at a cruising speed of 15.6464 m/s, a deformation of 5.6053e−8 m is obtained in the same test. As a result, the proposed EV battery pack design shows the potential to improve sustainability, performance, and ease of disassembly.
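The Von Mises stresses reported by these simulations are the standard equivalent stress computed from the three principal stresses; a design passes when this value stays below the material's yield strength. A quick check, assuming the principal stresses are available (the function name is ours):

```python
import math

def von_mises(s1, s2, s3):
    """Von Mises equivalent stress from the three principal stresses (same units)."""
    return math.sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2.0)
```

Under pure uniaxial loading the equivalent stress equals the applied stress, while a purely hydrostatic state contributes nothing, which is why Von Mises stress is used as a yield criterion for ductile pack materials.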
Article
Tactile feedback is relevant in a broad range of human–machine interaction systems (e.g. teleoperation, virtual reality and prosthetics). The available tactile feedback interfaces comprise few sensing and stimulation units, which limits the amount of information conveyed to the user. The present study describes a novel technology that relies on distributed sensing and stimulation to convey comprehensive tactile feedback to the user of a robotic end effector. The system comprises six flexible sensing arrays (57 sensors) integrated on the fingers and palm of a robotic hand, embedded electronics (64 recording channels), a multichannel stimulator and seven flexible electrodes (64 stimulation pads) placed on the volar side of the subject’s hand. The system was tested in seven subjects asked to recognize contact positions and identify contact sliding on the electronic skin, using distributed anode configuration (DAC) and single dedicated anode configuration. The experiments demonstrated that DAC resulted in substantially better performance. Using DAC, the system successfully translated the contact patterns into electrotactile profiles that the subjects could recognize with satisfactory accuracy (i.e. median {IQR} of 88.6 {11}% for static and 93.3 {5}% for dynamic patterns). The proposed system is an important step towards the development of high-density human–machine interfacing between the user and a robotic hand. This article is part of the theme issue ‘Advanced neurotechnologies: translating innovation for health and well-being’.
Article
Full-text available
How do people learn to perform tasks that require continuous adjustments of motor output, like riding a bicycle? People rely heavily on cognitive strategies when learning discrete movement tasks, but such time-consuming strategies are infeasible in continuous control tasks that demand rapid responses to ongoing sensory feedback. To understand how people can learn to perform such tasks without the benefit of cognitive strategies, we imposed a rotation/mirror reversal of visual feedback while participants performed a continuous tracking task. We analyzed behavior using a system identification approach which revealed two qualitatively different components of learning: adaptation of a baseline controller and formation of a new, task-specific continuous controller. These components exhibited different signatures in the frequency domain and were differentially engaged under the rotation/mirror reversal. Our results demonstrate that people can rapidly build a new continuous controller de novo and can simultaneously deploy this process with adaptation of an existing controller.
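The system-identification approach described above can be illustrated with a minimal sketch: probing an unknown tracker with a sinusoid and reading off one point of its transfer function from the ratio of output to input spectra. The sampling rate, probe frequency, and the delay-and-scale "controller" below are invented for illustration, not taken from the study.

```python
import numpy as np

def frequency_response(stimulus, response, fs, freqs):
    """Estimate gain and phase of an unknown tracker at probe frequencies.

    The ratio of the response spectrum to the stimulus spectrum at each
    probed frequency gives one point of the controller's transfer function.
    """
    n = len(stimulus)
    X = np.fft.rfft(stimulus)
    Y = np.fft.rfft(response)
    bins = np.round(np.asarray(freqs) * n / fs).astype(int)
    H = Y[bins] / X[bins]
    return np.abs(H), np.angle(H)

# Toy check: the "controller" is a pure delay-and-scale system
fs = 100.0
t = np.arange(0, 20, 1 / fs)                       # 20 s of tracking at 100 Hz
probe = 0.5                                        # probe frequency in Hz
x = np.sin(2 * np.pi * probe * t)                  # target motion
y = 0.8 * np.sin(2 * np.pi * probe * (t - 0.1))    # gain 0.8, 100 ms lag
gain, phase = frequency_response(x, y, fs, [probe])
```

Repeating this at many probe frequencies yields the frequency-domain signatures the authors use to dissociate adaptation of a baseline controller from formation of a new one.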
Article
Full-text available
We see with the brain, not the eyes (Bach-y-Rita, 1972); images that pass through our pupils go no further than the retina. From there image information travels to the rest of the brain by means of coded pulse trains, and the brain, being highly plastic, can learn to interpret them in visual terms. Perceptual levels of the brain interpret the spatially encoded neural activity, modified and augmented by nonsynaptic and other brain plasticity mechanisms (Bach-y-Rita, 1972, 1995, 1999, in press). However, the cognitive value of that information is not merely a process of image analysis. Perception of the image relies on memory, learning, contextual interpretation (e.g., we perceive the intent of the driver in the slight lateral movements of a car in front of us on the highway), cultural, and other social factors that are probably exclusively human characteristics that provide "qualia" (Bach-y-Rita, 1996b). This is the basis for our tactile vision substitution system (TVSS) studies that, starting in 1963, have demonstrated that visual information and the subjective qualities of seeing can be obtained tactually using sensory substitution systems. The description of studies with this system has been taken
Article
Full-text available
A key issue in developmental neuroscience is the role of activity-dependent mechanisms in the epigenetic induction of functional organization in visual cortex. Ocular blindness and ensuing visual deprivation is one of the rare models available for the investigation of experience-dependent cortical reorganization in man. In a PET study we demonstrate that congenitally blind subjects show task-specific activation of extrastriate visual areas and parietal association areas during Braille reading, compared with auditory word processing. In contrast, blind subjects who lost their sight after puberty show additional activation in the primary visual cortex with the same tasks. Studies in blind-raised monkeys show that crossmodal responses in extrastriate areas can be elicited by somatosensory stimulation. This is consistent with the crossmodal extrastriate activations elicited by tactile processing in our congenitally blind subjects. Since primary visual cortex does not show crossmodal responses in primate studies, the differential activation in late and congenitally blind subjects highlights the possibility of reciprocal activation by visual imagery in subjects with early visual experience.
Article
Full-text available
In the first part of the article, I sketch the philosophical playing field in which Molyneux's problem was inscribed in the eighteenth century, taking Berkeley's answer as a reference point. I argue that what is essential in Molyneux's problem is not the debate between nativism and empiricism, but the question of whether spatial perception is modal and, correlatively, whether visual perception is intrinsically spatial or whether motricity plays a determining role in the construction of visual-type spatial representations. I then examine certain recent developments concerning Molyneux's problem, in particular some of Paul Bach-y-Rita's work on visuo-tactile substitution. This allows me to highlight an important difference between the two problematics, namely a radical modification of the conception of the relations between perception and sensation. Bach-y-Rita's experiment illustrates the possibility that visual sensations and visual perceptions are independent: our visual perceptions need not be founded on visual sensations. This divorce between sensations and perceptions thus invalidates one of the main argumentative strategies of the eighteenth century: that there is nothing in common between visual and tactile sensations does not imply that the spatial ideas attached to the one cannot be attached to the other. In other words, the modal character of sensations does not by itself imply the modal character of perceptions. The contemporary debate is further distinguished from the eighteenth-century one by a much clearer insistence on the active character of perception, as opposed to the passive character of sensation. This makes possible a redefinition of the epistemic role of movement in perception.
I try to show that the role attributed to movement in perception by Bach-y-Rita corresponds to what Gibson calls exploratory movement and does not presuppose a spatial representation of movement. Insisting on the role of movement in visual perception therefore does not amount to smuggling in an already constituted spatiality. On the other hand, it should be noted that purely visual space is both an uncalibrated space, without absolute distance, and a space with no guarantee of unicity. Whether visual perception provides us with an objective representation of space is therefore subordinate to the question of whether absoluteness and unicity are necessary ingredients of an objective representation of space. If one holds that a representation is properly spatial only insofar as the space it represents is represented as absolute, then such a representation cannot be given to us by perception alone, whatever the modality considered, but depends on a perception-action coupling. If one holds that unicity is a necessary ingredient, then it depends on the unity of a sensing self, defined by the possibility of the simultaneous exercise of several senses and by the possibility of producing movements that induce transformations in the stimuli of at least two perceptual modalities.
Article
Full-text available
Representation of locomotor space by early- and late-blind subjects and by blindfolded sighted subjects was studied within a perimeter where the direction and distance of landmarks had to be located. Subjects were guided along routes to be explored, both with and without the use of an ultrasonic echolocating prosthesis that enabled object localization. Without the prosthesis, early-blind subjects’ performance was worse than that of visually experienced subjects, both in direction and in distance assessments. With the help of the prosthesis, early- and late-blind subjects’ performance improved, especially in distance assessments; late-blinds’ performance remained better than that of early-blinds. These results suggest that early-blinds’ spatial representation would be the most impaired on routes requiring the mastering of euclidean concepts.
Article
Full-text available
To explore the neural networks used for Braille reading, we measured regional cerebral blood flow with PET during tactile tasks performed both by Braille readers blinded early in life and by sighted subjects. Eight proficient Braille readers were studied during Braille reading with both right and left index fingers. Eight-character, non-contracted Braille-letter strings were used, and subjects were asked to discriminate between words and non-words. To compare the behaviour of the brain of the blind and the sighted directly, non-Braille tactile tasks were performed by six different blind subjects and 10 sighted control subjects using the right index finger. The tasks included a non-discrimination task and three discrimination tasks (angle, width and character). Irrespective of reading finger (right or left), Braille reading by the blind activated the inferior parietal lobule, primary visual cortex, superior occipital gyri, fusiform gyri, ventral premotor area, superior parietal lobule, cerebellum and primary sensorimotor area bilaterally, also the right dorsal premotor cortex, right middle occipital gyrus and right prefrontal area. During non-Braille discrimination tasks, in blind subjects, the ventral occipital regions, including the primary visual cortex and fusiform gyri bilaterally were activated while the secondary somatosensory area was deactivated. The reverse pattern was found in sighted subjects where the secondary somatosensory area was activated while the ventral occipital regions were suppressed. These findings suggest that the tactile processing pathways usually linked in the secondary somatosensory area are rerouted in blind subjects to the ventral occipital cortical regions originally reserved for visual shape discrimination.
Article
Full-text available
The purpose of this study was to investigate the neural networks involved when using an ultrasonic echolocation device, which is a substitution prosthesis for blindness through audition. Using positron emission tomography with fluorodeoxyglucose, regional brain glucose metabolism was measured in the occipital cortex of early blind subjects and blindfolded controls who were trained to use this prosthesis. All subjects were studied under two different activation conditions: (i) during an auditory control task, (ii) using the ultrasonic echolocation device in a spatial distance and direction evaluation task. Results showed that the abnormally high metabolism already observed in early blind occipital cortex at rest [C. Veraart, A.G. De Volder, M.C. Wanet-Defalque, A. Bol, C. Michel, A.M. Goffinet, Glucose utilization in human visual cortex is, respectively elevated and decreased in early versus late blindness, Brain Res. 510 (1990) 115-121.] was also present during the control task and showed a trend to further increase during the use of the ultrasonic echolocation device. This specific difference in occipital cortex activity between the two tasks was not observed in control subjects. The metabolic recruitment of the occipital cortex in early blind subjects using a substitution prosthesis could reflect a concurrent stimulation of functional cross-modal sensory connections. Given the unfamiliarity of the task, it could be interpreted as a prolonged plasticity in the occipital cortex early deprived of visual afferences.
Article
Full-text available
Form perception with the tongue was studied with a 49-point electrotactile array. Five sighted adult human subjects (3M/2F) each received 4 blocks of 12 tactile patterns, approximations of circles, squares, and vertex-up equilateral triangles, sized to 4x4, 5x5, 6x6, and 7x7 electrode arrays. Perception with electrical stimulation of the tongue is better than with fingertip electrotactile stimulation, and the tongue requires 3% (5-15 V) of the voltage. The mean current for tongue subjects was 1.612 mA. Tongue shape recognition performance across all sizes was 79.8%. The approximate dimensions of the electrotactile array and the dimensions of compartments built into dental retainers have been determined. The goal is to develop a practical, cosmetically acceptable, wireless system for blind persons, with a miniature TV camera, microelectronics, and FM transmitter built into a pair of glasses, and the electrotactile array in a dental orthodontic retainer.
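The stimulus patterns described (circle, square, triangle outlines scaled from 4x4 to 7x7 electrodes) can be generated programmatically. The sketch below, with an invented rasterization rule, shows two of the shape families on a 7x7 grid like the tongue array's.

```python
import numpy as np

def square_outline(size, grid=7):
    """Square outline of side `size` electrodes, anchored at the array corner."""
    p = np.zeros((grid, grid), dtype=int)
    p[0, :size] = p[size - 1, :size] = 1
    p[:size, 0] = p[:size, size - 1] = 1
    return p

def circle_outline(size, grid=7):
    """Rasterized circle of diameter `size` electrodes (crude midpoint rule)."""
    p = np.zeros((grid, grid), dtype=int)
    c = (size - 1) / 2.0          # centre of the size x size region
    r = size / 2.0 - 0.5          # target radius in electrode units
    for i in range(size):
        for j in range(size):
            if abs(np.hypot(i - c, j - c) - r) < 0.5:
                p[i, j] = 1
    return p
```

Each 0/1 grid would then drive the corresponding electrodes of the 49-point array; the actual patterns used in the study may differ in detail.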
Article
Full-text available
Electrotactile (electrocutaneous) stimulation at currents greater than sensation threshold causes sensory adaptation, which temporarily raises the sensation threshold and reduces the perceived magnitude of stimulation. After 15 min of moderately intense exposure to a conditioning stimulus (10 s on, 10 s off), the sensation threshold elevation for seven observers was 60-270%, depending on the current, frequency, and number of pulses in the burst structure of the conditioning stimulus. Increases in any of these parameters increased the sensation threshold elevation. Adaptation and recovery were each complete in approximately 15 min.
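The time course reported above (adaptation and recovery each complete in roughly 15 min) is consistent with simple first-order kinetics. The sketch below is a hypothetical model, not a fit to the paper's data; the time constant and ceiling are illustrative values chosen to fall inside the reported 60-270% elevation range.

```python
import math

def threshold_elevation(t_min, tau=5.0, max_elev=1.5):
    """First-order sketch of sensation-threshold elevation during adaptation.

    tau (minutes) and max_elev (fractional elevation; 1.5 = 150%) are
    illustrative parameters, not values fitted to the reported data.
    """
    return max_elev * (1.0 - math.exp(-t_min / tau))

def recovery(t_min, elev0, tau=5.0):
    """Exponential decay of the elevation after the conditioning stimulus ends."""
    return elev0 * math.exp(-t_min / tau)
```

With tau = 5 min, the elevation is about 95% of its ceiling after 15 min, matching the observation that adaptation is essentially complete on that timescale.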
Article
Full-text available
A 100-channel neurostimulation circuit comprising a complementary metal oxide semiconductor (CMOS) application-specific integrated circuit (ASIC) has been designed, constructed and tested. The ASIC forms a significant milestone and an integral component of a 100-electrode neurostimulation system being developed by the authors. The system comprises an externally worn transmitter and a body-implantable stimulator. The purpose of the system is to communicate both data and power across tissue via radio-frequency (RF) telemetry such that externally programmable, constant-current, charge-balanced, biphasic stimuli may be delivered to neural tissue at 100 unique sites. An intrinsic reverse telemetry feature of the ASIC has been designed such that information pertaining to device function, reconstruction of the stimulation voltage waveform, and the measurement of impedance may be obtained through noninvasive means. To compensate for the paucity of data pertaining to the stimulation thresholds necessary to evoke a physiological response, the ASIC has been designed with scalable current output. The ASIC has been designed primarily as a treatment of degenerative disorders of the retina, whereby the 100 channels are to be utilized in delivering a pattern of stimuli of varying intensity and/or duty cycle to the surviving neural tissue of the retina. However, it is conceivable that other fields of neurostimulation, such as cochlear prosthetics and functional electronic stimulation, may benefit from the employment of the system.
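The charge-balance requirement mentioned above is easy to state quantitatively: the cathodic and anodic phases of each biphasic pulse must carry equal and opposite charge so no net charge accumulates in tissue. A minimal sketch (function name and example values are invented, not from the ASIC's specification):

```python
def net_charge_pC(i_cathodic_uA, t_cathodic_us, i_anodic_uA, t_anodic_us):
    """Net charge injected by one biphasic pulse, in pC (uA x us = pC).

    A charge-balanced pulse returns exactly zero: the anodic phase
    recovers all the charge delivered by the cathodic phase.
    """
    return i_anodic_uA * t_anodic_us - i_cathodic_uA * t_cathodic_us

# A 100 uA, 200 us cathodic phase balanced by a 50 uA, 400 us anodic phase
balanced = net_charge_pC(100, 200, 50, 400)
```

Asymmetric-amplitude designs like this one trade a longer anodic phase for a lower recovery current while keeping the charge integral at zero.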
Article
Full-text available
Electronic travel aids (ETAs) for the blind commonly employ conventional time-of-flight sonars to provide range measurements, but their wide beams prevent accurate determination of object bearing. We describe a binaural sonar that detects objects over a wider bearing interval compared with a single transducer and also determines if the object lies to the left or right of the sonar axis in a robust manner. The sonar employs a pair of Polaroid 6500 ranging modules connected to Polaroid 7000 transducers operating simultaneously in a binaural array configuration. The sonar determines which transducer detects the echo first. An outward vergence angle between the transducers improves the first-echo detection reliability by increasing the delay between the two detected echoes, a consequence of threshold detection. We exploit this left/right detection capability in an ETA that provides vibrotactile feedback. Pager motors mount on both sides of the sonar, possibly worn on the user's wrists. The motor on the same side as the reflecting object vibrates with speed inversely related to range. As the sonar or object moves, vibration patterns provide landmark, motion and texture cues. Orienting the sonar at 45 degrees relative to the travel direction and passing a right-angle corner produces a characteristic vibrational pattern. When pointing the sonar at a moving object, such as a fluttering flag, the motors alternate in a manner to give the user a perception of the object motion. When the sonar translates or rotates to scan a foliage surface, the vibrational patterns are related to the surface scatterer distribution, allowing the user to identify the foliage.
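The core logic described (first-echo side detection plus range-dependent vibrotactile feedback) can be sketched in a few lines. The speed of sound, the linear speed law and all parameter values below are assumptions for illustration, not figures from the paper.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def echo_delay_s(range_m):
    """Round-trip time of flight for a sonar echo at range_m metres."""
    return 2.0 * range_m / SPEED_OF_SOUND

def side_and_motor_speed(t_left_s, t_right_s, range_m,
                         max_speed=1.0, max_range=6.0):
    """Which pager motor vibrates, and how fast.

    The transducer whose echo arrives first indicates the object's side;
    the motor speed on that side falls off with range (here linearly,
    an assumed law standing in for 'inversely related to range').
    """
    side = "left" if t_left_s < t_right_s else "right"
    speed = max_speed * max(0.0, 1.0 - range_m / max_range)
    return side, speed
```

The outward vergence angle in the real device increases the delay between the two detected echoes, making the left/right comparison above more robust to threshold-detection jitter.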
Article
Full-text available
A brain model is proposed which describes its structural organization and the related functions as compartments organized in time and space. On a molecular level the negative feedback loops of clock-controlled genes are interpreted as compartments. This spatio-temporal operational principle may also work on the cellular level as glial-neuronal interactions, wherein glia have a spatio-temporal boundary setting function. The synchronization of the multi-compartmental operations of the brain is compared to the harmonization in a symphony and appears as an integrated behavior of the whole organism, defined as modes of behavior. For explanation of the principle of harmonization, an example from Schubert's Symphony No. 8 has been chosen. While harmonization refers to the synchronization of diverse systems, it seems appropriate to select the brain of a composer and the structure of musical composition as a paradigm towards a glial-neuronal brain theory. Finally, some limitations of experimental brain research are discussed and robotics are proposed as a promising alternative.
Article
Full-text available
A paradigm is described for recording the activity of single cortical neurons from awake, behaving macaque monkeys. Its unique features include high-density microwire arrays and multichannel instrumentation. Three adult rhesus monkeys received microwire array implants, totaling 96-704 microwires per subject, in up to five cortical areas, sometimes bilaterally. Recordings 3-4 weeks after implantation yielded 421 single neurons with a mean peak-to-peak voltage of 115 +/- 3 microV and a signal-to-noise ratio of better than 5:1. As many as 247 cortical neurons were recorded in one session, and at least 58 neurons were isolated from one subject 18 months after implantation. This method should benefit neurophysiological investigation of learning, perception, and sensorimotor integration in primates and the development of neuroprosthetic devices.
Article
Full-text available
The human postural coordination mechanism is an example of a complex closed-loop control system based on multisensory integration [9,10,13,14]. In models of this process, sensory data from vestibular, visual, tactile and proprioceptive systems are integrated as linearly additive inputs that drive multiple sensory-motor loops to provide effective coordination of body movement, posture and alignment [5-8, 10, 11]. In the absence of normal vestibular (such as from a toxic drug reaction) and other inputs, unstable posture occurs. This instability may be the result of noise in a functionally open-loop control system [9]. Nonetheless, after sensory loss the brain can utilize tactile information from a sensory substitution system for functional compensation [1-4, 12]. Here we have demonstrated that head-body postural coordination can be restored by means of vestibular substitution using a head-mounted accelerometer and a brain-machine interface that employs a unique pattern of electrotactile stimulation on the tongue. Moreover, postural stability persists for a period of time after removing the vestibular substitution, after which the open-loop instability reappears.
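The vestibular-substitution mapping described (head-mounted accelerometer driving a pattern of electrotactile stimulation on the tongue) can be sketched as a tilt-to-electrode transform: the stimulus locus moves across the electrode grid in the direction of head tilt. Grid size, gain, and the clamping rule below are illustrative assumptions, not the published device parameters.

```python
def tilt_to_electrode(ax_g, ay_g, grid=13, gain=3.0):
    """Map head tilt (accelerometer x/y components, in g) to a row/column
    on a grid x grid tongue electrode array.

    The stimulus starts at the array centre when the head is level and
    moves toward the edge as tilt grows, saturating at the array border.
    """
    centre = (grid - 1) // 2
    col = min(grid - 1, max(0, round(centre + gain * grid * ax_g)))
    row = min(grid - 1, max(0, round(centre + gain * grid * ay_g)))
    return row, col
```

Closing the loop, the subject learns to move the head so the felt locus returns to the array centre, which is what restores head-body postural coordination.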
Conference Paper
Full-text available
An electrocutaneous display system composed of three layers is implemented for augmentation of skin sensation. The first layer has electrodes on the front side of a thin plate, the second has optical sensors on the reverse side of the plate, and the third is a thin film force sensor between the other two layers. Visual images captured by the sensor are translated into tactile information, and displayed through electrical stimulation. Thus, visual surface information can be perceived through the skin while natural tactile sensation is unhindered. Based on the sensor the user can "touch" other modalities of surface information as well.
Conference Paper
Full-text available
We have developed a tactile display that uses electric current applied at the skin surface as a stimulus. Our main objective was to independently stimulate a variety of mechanoreceptors and to generate specific sensations by combining stimuli. The key to this goal is selective nerve stimulation. In this paper, a mathematical framework is built for the general design of selective stimulation. The geometries of the electrodes and nerve fibers are arbitrary, and the waveform of the electric current from each electrode is independently controlled. Furthermore, the problem is formulated as a linear or quadratic program, which provides an optimal solution.
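The quadratic variant of such a formulation can be illustrated with a toy example: if fiber activation is modeled as linear in the electrode currents, steering activation toward a target pattern is a least-squares problem. The sensitivity matrix below is invented for illustration; the paper's actual model and constraints are richer.

```python
import numpy as np

# Hypothetical sensitivity matrix: A[j, k] is how strongly electrode k
# activates nerve fiber j. With activation linear in the currents
# (a = A @ i), hitting a target activation pattern is a quadratic
# (least-squares) problem, in the spirit of the paper's formulation.
A = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.3],
              [0.1, 0.2, 1.0]])
target = np.array([1.0, 0.0, 0.0])   # stimulate fiber 0 only
currents, *_ = np.linalg.lstsq(A, target, rcond=None)
activation = A @ currents
```

A real design would add inequality constraints (current limits, charge balance), which is where linear/quadratic programming proper comes in.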
Article
Full-text available
Two studies were conducted to determine the effect of stimulation current on pattern perception on a 49-point fingertip-scanned electrotactile (electrocutaneous) display. Performance increased monotonically from near chance levels at the lowest subthreshold current levels tested to approximately 90% at the highest comfortable current levels. This suggests the existence of a tradeoff between spatial performance and usable "gray scale" range in electrotactile presentation of graphical information.
Article
Full-text available
The effect of stimulation waveform on pattern perception was investigated on a 49-point fingertip-scanned electrotactile (electrocutaneous) display. Waveform variables burst frequency (F), number of pulses per burst (NPB), and pulse repetition rate (PRR) were varied in a factorial design. Contrast reduction was used to limit performance of perceiving a 1-tactor gap defined within a 3 × 3 tactor outline square. All three variables accounted for significant variations in performance with higher levels of F and NPB and lower levels of PRR, leading to better performance. In addition, we collected qualitative data on each waveform, and the qualitative differences were related to performance (e.g., waveforms perceived as having a more localized sensation were correlated with better pattern identification performance than those waveforms perceived as more broad). We also investigated the effect of stimulation contrast on pattern perception.
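The three waveform factors varied in the study (burst frequency F, pulses per burst NPB, and pulse repetition rate PRR) fully determine the pulse timing. A minimal generator, with invented parameter values in the usage note, makes the burst structure concrete:

```python
def pulse_onsets(F, NPB, PRR, duration_s=1.0):
    """Pulse-onset times (s) for a burst-structured electrotactile waveform.

    F: burst frequency (bursts/s); NPB: number of pulses per burst;
    PRR: pulse repetition rate within a burst (pulses/s).
    """
    times = []
    for b in range(int(duration_s * F)):
        burst_start = b / F
        for p in range(NPB):
            t = burst_start + p / PRR
            if t < duration_s:
                times.append(t)
    return times
```

For example, F = 10, NPB = 3, PRR = 300 yields ten bursts per second of three closely spaced pulses each; these are illustrative values, not the levels tested in the study.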
Article
Full-text available
An experimental system for the conversion of images into sound patterns was designed to provide auditory image representations within some of the known limitations of the human hearing systems possibly as a step towards the development of a vision substitution device for the blind. The application of an invertible (one-to-one) image-to-sound mapping ensures the preservation of visual information. The system implementation involves a pipelined special-purpose computer connected to a standard television camera. A novel design and the use of standard components have made for a low-cost portable prototype conversion system with a power dissipation suitable for battery operation. Computerized sampling of the system output and subsequent calculation of the approximate inverse (sound-to-image) mapping provided the first convincing experimental evidence for the preservation of visual information in sound representations of complicated images.
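The spirit of such an image-to-sound mapping can be sketched as a column scan: the image is read left to right, each pixel row maps to a fixed sine frequency (top of image = high pitch), and pixel brightness sets that sine's amplitude. All parameter values below are illustrative, not the paper's; a genuinely invertible mapping additionally requires the frequency grid and timing to be chosen so the inverse exists.

```python
import numpy as np

def image_to_sound(image, fs=8000, col_dur=0.05, f_lo=400.0, f_hi=3200.0):
    """Convert a 2-D brightness image into a column-scanned sound signal."""
    rows, cols = image.shape
    freqs = np.linspace(f_hi, f_lo, rows)   # row 0 (top) gets the highest pitch
    n = int(fs * col_dur)
    t = np.arange(n) / fs
    columns = []
    for c in range(cols):
        # superpose one sine per row, weighted by that pixel's brightness
        tone = sum(image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
                   for r in range(rows))
        columns.append(tone)
    return np.concatenate(columns)

img = np.zeros((4, 2))
img[0, 0] = 1.0       # one bright pixel: a high tone in the first column slot
audio = image_to_sound(img)
```

Sampling the output and inverting the mapping (estimating per-row amplitudes from each column's spectrum) is the check the authors used to argue that visual information is preserved.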
Article
The problem of blind source separation of instantaneous mixtures, that is, recovering unknown sources from observations without knowing the characteristics of the transmission medium, was studied using third- and fourth-order cumulants. A class of criteria is presented, which states that for independent sources, separation is possible provided that no more than one source is Gaussian. Algorithms based on these criteria are also developed, and their efficiency is verified by experiments.
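A toy version of a fourth-order-cumulant criterion can be sketched for two sources: whiten the mixtures, then pick the rotation that maximizes total kurtosis magnitude. This is a crude stand-in for the paper's criteria; the sources and mixing matrix are invented, and note that at most one source may be Gaussian, as the criterion requires.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Two independent non-Gaussian sources and an invented mixing matrix
s1 = rng.uniform(-1.0, 1.0, n)                              # sub-Gaussian
s2 = rng.choice([-1.0, 1.0], n) * rng.exponential(1.0, n)   # super-Gaussian
S = np.vstack([s1, s2])
X = np.array([[0.8, 0.6], [0.4, 1.0]]) @ S                  # observed mixtures

# Whitening reduces the problem to finding a rotation
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

def kurt(y):
    """Fourth-order cumulant (unnormalized excess kurtosis)."""
    return np.mean(y ** 4) - 3.0 * np.mean(y ** 2) ** 2

def score(a):
    """Total cumulant magnitude of the two rotated components."""
    return (abs(kurt(np.cos(a) * Z[0] + np.sin(a) * Z[1])) +
            abs(kurt(np.cos(a) * Z[1] - np.sin(a) * Z[0])))

# Scan rotation angles and keep the best one
a = max(np.linspace(0.0, np.pi / 2, 181), key=score)
Y = np.array([[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]]) @ Z
```

The recovered rows of Y match the original sources up to permutation, sign, and scale, which is the best any blind method can do.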
Chapter
The average adult has approximately 2m2 of skin (Gibson, 1968), about 90% hairy, the remainder smooth or glabrous. Although the glabrous areas are more sensitive than the hairy, both types are highly innervated with sensory receptors and nerves (Sinclair, 1981). Tactile displays have utilized both glabrous and hairy skin, the type selected being relative to the sensory display needs of the various investigators. There are several advantages to selecting the skin as the sensory surface to receive information. (1) It is accessible, extensive in area, richly innervated, and capable of precise discrimination. Further, when the skin of the forehead or trunk is used, the tactile display system does not interfere materially with motor or other sensory functions. (2) The skin shows a number of functional similarities to the retina of the eye in its capacity to mediate information. Large parts of the body surface are relatively flat, and the receptor surfaces of the skin, like the retina, are capable of mediating displays in two spatial dimensions as well as having the potential for temporal integration (summation over time). Thus, there is generally no need for complex topological transformation or for temporal coding of pictorial information for direct presentation onto the accessible areas of the skin, although temporal display factors have been explored with the goal of transmitting spatial information across the skin more quickly than is possible with present systems (Kaczmarek et al., 1984; Bach-y-Rita and Hughes, 1985; Kaczmarek et al., 1985; Loomis and Lederman, 1986). Spatial patterns learned visually can be identified tactually, and vice versa (Epstein et al., 1989; Hughes et al., 1990). (3) Certain types of sensory inhibition, including the Mach band phenomenon and other examples of lateral inhibition originally demonstrated for vision, are equally demonstrable in the skin (Bekesy, 1967).
(4) Finally, there is evidence that the skin normally functions as an exteroceptor at least in a limited sense: Katz noted that to some extent both vibration and temperature changes can be felt at a distance (Krueger, 1970). For example, a blind person can “feel” the approach of a warm cylinder at three times the distance required by the sighted individual (Krueger, 1970).
Article
At its substratum, brain/mind organization requires both synaptic firings and non-synaptic events. Synaptic firings organize the pattern of non-synaptic events. Non-synaptic events organize the pattern of synaptic firings. The processes are related in a bizarre hierarchy. Comparing these processes to electric circuits, it is as if we have two circuits that each continuously and simultaneously update the topology, and consequently the dynamical laws, of the other. Since either can be seen to be rebuilding the other, from its own perspective each process appears higher than the other in a hierarchy. This same kind of hierarchy is found in a hyperset structure. Interpreted as a directed graph, the nodes in a hyperset form a hierarchy in which, from the perspective of any node in the hierarchy, that node is at the top. This organizational structure violates the Foundation Axiom. Algorithmic computation strictly complies with the Foundation Axiom. Thus, an algorithm organized like a hyperset is a contradiction in terms. Does this contradiction mean we are forever precluded from implementing brain-like activities artificially? Not at all! An algorithm is incapable of doing the job, but nothing prevents us from constructing interacting analog processes that update each other's dynamical laws on the fly.
Article
Sensory systems are associated with motor systems for perception. In the absence of motor control over the orientation of the sensory input, a person may have no idea where the information is coming from, and thus no ability to locate it in space. Sensory substitution studies have demonstrated that the sensory part of a sensory-motor loop can be provided by artificial receptors leading to a brain-machine interface (BMI). We now propose that the motor component of the sensory-motor coupling can be replaced by a "virtual" movement. We suggest that it is possible to progress to the point where predictable movement, not observed except for some sign of its initiation, could be imagined, and by that means the mental image of movement could substitute for the motor component of the loop. We further suggest that, due to the much faster information transmission of the skin than the eye, innovative information presentation, such as fast sequencing and time-division multiplexing, can be used to partially compensate for the relatively small number of tactile stimulus points in the BMI. With such a system, incorporating humans-in-the-loop for industrial applications could result in increased efficiency and humanization of tasks that presently are highly stressful. Key words: sensori-motor loop, spatial localisation, movement, mental
Article
We introduce a distinction between cortical dominance and cortical deference, and apply it to various examples of neural plasticity in which input is rerouted intermodally or intramodally to nonstandard cortical targets. In some cases but not others, cortical activity `defers' to the nonstandard sources of input. We ask why, consider some possible explanations, and propose a dynamic sensorimotor hypothesis. We believe that this distinction is important and worthy of further study, both philosophical and empirical, whether or not our hypothesis turns out to be correct. In particular, the question of how the distinction should be explained is linked to explanatory gap issues for consciousness. Comparative and absolute explanatory gaps should be distinguished: why does neural activity in a particular area of cortex have this qualitative expression rather than that, and why does it have any qualitative expression at all? We use the dominance/deference distinction to address the comparative gaps, both intermodal and intramodal (not the absolute gap). We do so not by inward scrutiny but rather by expanding our gaze to include relations between brain, body and environment.
Article
A system for converting an optical image into a tactile display has been evaluated to see what promise it has as a visual substitution system. After surprisingly little training, Ss are able to recognize common objects and to describe their arrangement in three-dimensional space. When given control of the sensing and imaging device, a television camera, Ss quickly achieve external subjective localization of the percepts. Limitations of the system thus far appear to be more a function of display resolution than limitations of the skin as a receptor surface. The acquisition of skill with the device has been remarkably similar for blind and sighted Ss.
Article
1. Multiple microelectrode maps of the hand representation within and across the borders of cortical area 3b were obtained before, immediately after, or several weeks after a period of behaviorally controlled hand use. Owl monkeys were conditioned in a task that produced cutaneous stimulation of a limited sector of skin on the distal phalanges of one or more fingers. 2. Analysis of microelectrode mapping experiment data revealed that 1) stimulated skin surfaces were represented over expanded cortical areas. 2) Most of the cutaneous receptive fields recorded within these expanded cortical representational zones were unusually small. 3) The internal topography of representation of the stimulated and immediately surrounding skin surfaces differed greatly from that recorded in control experiments. Representational discontinuities emerged in this map region, and "hypercolumn" distances in this map sector were grossly abnormal. 4) Borders between the representations of individual digits and digit segments commonly shifted. 5) The functionally defined rostral border of area 3b shifted farther rostralward, manifesting either an expansion of the cutaneous area 3b fingertip representation into cortical field 3a or an emergence of a cutaneous input zone in the caudal aspect of this normally predominantly deep-receptor representational field. 6) Significant lateralward translocations of the borders between the representations of the hand and face were recorded in all cases. 7) The absolute locations, and in some cases the areas or magnifications, of representations of many skin surfaces not directly involved in the trained behavior also changed significantly. However, the most striking areal, positional, and topographic changes were related to the representations of the behaviorally stimulated skin in every studied monkey. 3. These experiments demonstrate that functional cortical remodeling of the S1 koniocortical field, area 3b, results from behavioral manipulations in normal adult owl monkeys. We hypothesize that these studies manifest operation of the basic adaptive cortical process(es) underlying cortical contributions to perception and learning.
Article
WE describe here a vision substitution system which is being developed as a practical aid for the blind and as a means of studying the processing of afferent information in the central nervous system. The theoretical neurophysiological basis1 and the physical concept of the instrumentation2 have been discussed previously, and results obtained with preliminary models have been briefly reported3. A detailed description of the apparatus will appear elsewhere (manuscript in preparation).
Article
The rehabilitation of blindness, using noninvasive methods, requires sensory substitution. A theoretical frame for sensory substitution has been proposed which consists of a model of the deprived sensory system connected to an inverse model of the substitutive sensory system. This paper addresses the feasibility of this conceptual model in the case of auditory substitution, and its implementation as a rough model of the retina connected to an inverse linear model of the cochlea. We have developed an experimental prototype. It aims at allowing optimization of the sensory substitution process. This prototype is based on a personal computer which is connected to a miniature head-fixed video camera and to headphones. A visual scene is captured. Image processing achieves edge detection and graded resolution. Each picture element (pixel) of the processed image is assigned a sinusoidal tone; weighted summation of these sinewaves builds up a complex auditory signal which is transduced by the headphones. On-line selection of various parameters and real-time functioning of the device allow optimization of parameters during psychophysical experimentations. Assessment of this implementation has been initiated, and has so far demonstrated prototype usefulness for pattern recognition. An integrated circuit of this system is to be developed.
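The pixel-to-tone coding described above can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation: we assign each pixel a log-spaced frequency (the actual prototype derives tones from an inverse linear model of the cochlea) and use pixel intensity as the tone's weight in the summation.

```python
import math

def image_to_sound(image, duration=0.1, sample_rate=8000,
                   f_min=200.0, f_max=4000.0):
    """Sketch of a pixel-to-tone coder for visual-to-auditory substitution.

    Each pixel of a grayscale image (values 0..1) is assigned its own
    sinusoidal tone; frequencies are log-spaced over f_min..f_max (our
    assumption, standing in for the paper's inverse cochlear model).
    The output is the normalized weighted sum of the sinewaves.
    """
    pixels = [v for row in image for v in row]
    n = len(pixels)
    freqs = [f_min * (f_max / f_min) ** (k / max(n - 1, 1)) for k in range(n)]
    n_samples = int(duration * sample_rate)
    signal = [0.0] * n_samples
    for weight, f in zip(pixels, freqs):
        if weight > 0:
            for i in range(n_samples):
                signal[i] += weight * math.sin(2 * math.pi * f * i / sample_rate)
    # normalize the complex signal to avoid clipping at the headphones
    peak = max(abs(s) for s in signal) or 1.0
    return [s / peak for s in signal]

# a single bright pixel yields a single pure tone
img = [[1.0, 0.0], [0.0, 0.0]]
wave = image_to_sound(img)
```

A real device would additionally perform the edge detection, graded resolution, and real-time streaming to headphones that the paper describes.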
Article
Recognition tasks of simple visual patterns have been used to assess an early visual-auditory sensory-substitution system, consisting of the coupling of a rough model of the human retina with an inverse model of the cochlea, by means of a pixel-frequency relationship. The potential advantage of the device, compared with previous ones, is to give the blind the ability to both localise and recognise visual patterns. Four evaluation sessions assessed the performance of twenty-four blindfolded sighted subjects using the device. Subjects had to recognise twenty-five visual patterns, one at a time, using a head-mounted small camera and interpreting the corresponding sounds given by the device. Half the subjects were trained by means of a correction feedback procedure during ten one-hour training sessions embedded in between the evaluation sessions. Results revealed extremely successful training effects. Performance of trained subjects significantly increased with practice compared with the untrained control group. The improvement was also observed for new patterns, demonstrating a learning-process generalisation. The negative correlation observed between scores and processing time showed that the subjects' response accuracy was related to their speed. In conclusion, simple pattern recognition is possible with a fairly natural vision-to-audition coding scheme, given the possibility for the subjects to have sensory-motor interactions while using the device.
Article
This PET study aimed at investigating the neural structures involved in pattern recognition in early blind subjects using sensory substitution equipment (SSE). Six early blind and six blindfolded sighted subjects were studied during three auditory processing tasks: a detection task with noise stimuli, a detection task with familiar sounds, and a pattern recognition task using the SSE. The results showed a differential activation pattern with the SSE as a function of the visual experience: in addition to the regions involved in the recognition process in sighted control subjects, occipital areas of early blind subjects were also activated. The occipital activation was more important when the early blind subjects used SSE than during the other auditory tasks. These results suggest that activity of the extrastriate visual cortex of early blind subjects can be modulated and bring additional evidence that early visual deprivation leads to cross-modal cerebral reorganization.
Article
The 'visual' acuity of blind persons perceiving information through a newly developed human-machine interface, with an array of electrical stimulators on the tongue, has been quantified using a standard ophthalmological test (Snellen Tumbling E). Acuity without training averaged 20/860. This doubled with 9 h of training. The interface may lead to practical devices for persons with sensory loss such as blindness, and offers a means of exploring late brain plasticity.
Article
Previous neuroimaging studies identified a large network of cortical areas involved in visual imagery in the human brain, which includes occipitotemporal and visual associative areas. Here we test whether the same processes can be elicited by tactile and auditory experiences in subjects who became blind early in life. Using positron emission tomography, regional cerebral blood flow was assessed in six right-handed early blind and six age-matched control volunteers during three conditions: resting state, passive listening to noise sounds, and mental imagery task (imagery of object shape) triggered by the sound of familiar objects. Activation foci were found in occipitotemporal and visual association areas, particularly in the left fusiform gyrus (Brodmann areas 19-37), during mental imagery of shape by both groups. Since shape imagery by early blind subjects does involve similar visual structures as controls at an adult age, it indicates their developmental crossmodal reorganization to allow perceptual representation in the absence of vision.
Article
Many current neurophysiological, psychophysical, and psychological approaches to vision rest on the idea that when we see, the brain produces an internal representation of the world. The activation of this internal representation is assumed to give rise to the experience of seeing. The problem with this kind of approach is that it leaves unexplained how the existence of such a detailed internal representation might produce visual consciousness. An alternative proposal is made here. We propose that seeing is a way of acting. It is a particular way of exploring the environment. Activity in internal representations does not generate the experience of seeing. The outside world serves as its own, external, representation. The experience of seeing occurs when the organism masters what we call the governing laws of sensorimotor contingency. The advantage of this approach is that it provides a natural and principled way of accounting for visual consciousness, and for the differences in the perceived quality of sensory experience in the different sensory modalities. Several lines of empirical evidence are brought forward in support of the theory, in particular: evidence from experiments in sensorimotor adaptation, visual "filling in," visual stability despite eye movements, change blindness, sensory substitution, and color perception.
Article
As long ago as 150 years, when Fritsch and Hitzig demonstrated the electrical excitability of the motor cortex, scientists and fiction writers were considering the possibility of interfacing a machine with the human brain. Modern attempts have been driven by concrete technological and clinical goals. The most advanced of these has brought the perception of sound to thousands of deaf individuals by means of electrodes implanted in the cochlea. Similar attempts are underway to provide images to the visual cortex and to allow the brains of paralyzed patients to re-establish control of the external environment via recording electrodes. This review focuses on two challenges: (1) establishing a 'closed loop' between sensory input and motor output and (2) controlling neural plasticity to achieve the desired behavior of the brain-machine system. Meeting these challenges is the key to extending the impact of the brain-machine interface.
Article
Synaptic communication, nonsynaptic diffusion neurotransmission and glial activity each update the morphology of the other two. These interactions lead to an endogenous structure of causal entailment. It has internal ambiguities rendering it incomputable. The entailed effects are bizarre. These include abduction of novelty in response to conflicting cues, a resolution of the seeming conflict between free will and determinism, and anticipatory behavior. Such inherent ambiguity of the causal entailment structure does not preclude the implementation of brain-like activities artificially. Although an algorithm is incapable of neuromimetically reproducing the self-referential character of the brain, there is a currently feasible strategy for wiring a "human in the loop" to use the cognitive powers of anticipation and unconscious integration to provide dramatic improvement in the operation of large engineered systems.
Conference Paper
In hard computing for engineering applications, we use explicit models derived from physical principles, and implement them on a computer as purely syntactic Turing-equivalent structures. When such a direct attack is not feasible, we resort to soft computing, using techniques arising from artificial intelligence to ferret out the secrets of a process based on implicit models derived from observed data. Again, we implement them on a computer as purely syntactic Turing-equivalent structures. As the interest of engineers moves toward problems in biomedical engineering and human-machine interaction, it is apparent that there are problems intractable even by the methods of soft computing. Processes of life and mind include internal semantics, including inherent semantic ambiguity, that are indispensable to their operation, but these semantics are totally missed by the purely syntactical strategies of both hard and soft computing. For engineers to make responsible decisions about systems that involve naturally occurring processes of life and mind, a new modeling strategy is required. It needs semantic models that can account for internal ambiguity and have so high a degree of flexibility that we may think of them as softer than "soft computing".
Kercel, S.W. et al. (2003) Bizarre hierarchy of brain function. In Intelligent Computing: Theory and Applications (Priddy, K.L. and Angeline, P.J., eds), Proceedings of SPIE (Vol. 5103), pp. 150–161, SPIE
Kercel, S.W. (2003) Softer than soft computing. In Proceedings of the 2003 IEEE International Workshop on Soft Computing in Industrial Applications (Kercel, S.W., ed.), pp. 27–32, IEEE
Jenkins, W.M. et al. (1990) Functional reorganization of primary somatosensory cortex in adult owl monkeys after behaviorally controlled tactile stimulation