ABSTRACT: Despite recent advances in harnessing cortical motor-related activity to control computer cursors and robotic devices, the ability to decode and execute different grasping patterns remains a major obstacle. Here we demonstrate a simple Bayesian decoder for real-time classification of grip type and wrist orientation in macaque monkeys that uses higher-order planning signals from the anterior intraparietal cortex (AIP) and ventral premotor cortex (area F5). Real-time decoding was based on multiunit signals, which had tuning properties similar to those of cells in previous single-unit recording studies. Maximum decoding accuracy for two grasp types (power and precision grip) and five wrist orientations was 63% (chance level, 10%). Analysis of decoder performance showed that grip type decoding was highly accurate (90.6%), with most errors occurring during orientation classification. In a subsequent off-line analysis, we found small but significant performance improvements (mean, 6.25 percentage points) when using an optimized spike-sorting method (superparamagnetic clustering). Furthermore, we observed significant differences in the contributions of F5 and AIP to grasp decoding, with F5 better suited to classifying grip type and AIP contributing more to decoding object orientation. However, decoding performance was highest when neural activity from both areas was used simultaneously. Overall, these results highlight quantitative differences in the functional representation of grasp movements in AIP and F5 and represent a first step toward using these signals to develop functional neural interfaces for hand grasping.
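The abstract describes the decoder only as "a simple Bayesian decoder" operating on multiunit activity, so the following is a minimal sketch under an assumed model: spike counts on each channel are treated as class-conditional Poisson variables (one condition per grip-type/orientation combination), and classification picks the condition with the highest log-likelihood. The function names and the Poisson assumption are illustrative, not the authors' implementation.

```python
# Hypothetical naive Bayes decoder sketch: spike counts per channel are
# modeled as Poisson with a condition-specific mean rate; decoding selects
# the condition maximizing the summed log-likelihood across channels.
import numpy as np

def fit(counts, labels):
    """counts: (trials, channels) spike counts; labels: (trials,) condition ids."""
    classes = np.unique(labels)
    # Mean rate per condition per channel; small constant avoids log(0).
    rates = np.array([counts[labels == c].mean(axis=0) + 1e-3 for c in classes])
    return classes, rates

def predict(counts, classes, rates):
    # Poisson log-likelihood, dropping terms constant across conditions:
    # sum_i [ n_i * log(lambda_ci) - lambda_ci ]
    ll = counts @ np.log(rates).T - rates.sum(axis=1)
    return classes[np.argmax(ll, axis=1)]
```

With ten conditions (two grips times five orientations), chance is 10%, matching the chance level quoted in the abstract; the reported pattern (accurate grip decoding, errors concentrated in orientation) would appear here as confusions between conditions sharing a grip type.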
The Journal of Neuroscience : The Official Journal of the Society for Neuroscience 10/2011; 31(40):14386-98. DOI:10.1523/JNEUROSCI.2451-11.2011
ABSTRACT: In the past decade the field of neural interface systems has enjoyed increased attention from the scientific community and the general public, in part due to the enormous potential such systems have to improve the quality of life of paralyzed patients. While significant progress has been made, serious challenges remain from both biological and engineering perspectives. A key issue is how to optimize the decoding of neural information, such that neural signals are correctly mapped to effectors that interact with the outside world - like robotic hands and limbs or the patient's own muscles. Here we present recent progress on this problem using the latest developments in machine learning. Neural data were collected from macaque monkeys performing a real-time hand grasp decoding task. Signals were recorded via chronically implanted electrodes in the anterior intraparietal cortex (AIP) and ventral premotor cortex (F5), brain areas known to be involved in the transformation of visual signals into hand grasping instructions. We present a comparative study of classical machine learning methods applied to decoding of hand postures, as well as a new approach for more robust decoding. Results suggest that combining data-driven algorithmic approaches with well-known parametric methods could lead to better-performing and more robust learners, which may have direct implications for future clinical devices.
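The abstract does not name the specific learners compared, so the snippet below is only an illustration of the kind of benchmarking described: several standard classifiers evaluated by cross-validation on synthetic multi-class features standing in for binned firing rates. The choice of classifiers and the synthetic data are assumptions for the sake of example.

```python
# Illustrative classifier comparison (not the paper's actual methods or data):
# cross-validated accuracy of a few classical learners on synthetic features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Synthetic stand-in for neural features: 4 hand postures, 40 channels.
X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("Gaussian naive Bayes", GaussianNB()),
                  ("Linear SVM", SVC(kernel="linear"))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```

Comparing parametric methods (LDA, naive Bayes) against data-driven ones (SVM) on held-out folds is one conventional way to quantify the robustness trade-off the abstract alludes to.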
Conference proceedings: ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Conference 08/2010; 2010:4172-5. DOI:10.1109/IEMBS.2010.5627386
ABSTRACT: A brain machine interface (BMI) for visually guided grasping would provide significant benefits for paralyzed patients, given the crucial role these movements play in everyday life. We have developed a BMI to decode grasp shape in real-time in macaque monkeys. Neural activity was evaluated using chronically implanted electrodes in the anterior intraparietal cortex (AIP) and ventral premotor cortex (F5), areas known to be involved in the transformation of visual signals into hand grasping instructions. In a first study, we decoded two grasp types (power and precision grip) and three grasp orientations (target oriented vertically or tilted left or right) from the neural activity during movement planning, with an accuracy of about 70%. These results are a proof of concept for a BMI for visually guided grasping that could be extended to larger numbers of grip types and grip orientations, as needed for prosthetic applications in humans.
ABSTRACT: Many biological activities take place through the physicochemical interaction of two molecules. This interaction occurs when one of the molecules finds a suitable location on the surface of the other for binding. This process is known as molecular docking, and it has applications to drug design. If we can determine which drug molecule binds to a particular protein and how the protein interacts with the bound molecule, we can possibly enhance or inhibit its activities. This information, in turn, can be used to develop new drugs that are more effective against diseases. In this paper, we propose a new approach based on the human-computer interaction paradigm for the solution of the rigid-body molecular docking problem. In our approach, a rigid ligand molecule (i.e. drug) manipulated by the user is inserted into the cavities of a rigid protein molecule to search for the binding cavity, while the molecular interaction forces are conveyed to the user via a haptic device for guidance. We developed a new visualization concept, the Active Haptic Workspace (AHW), for efficient exploration of the large protein surface in high resolution using a haptic device with a small workspace. After the user discovers the true binding site and roughly aligns the ligand molecule inside the cavity, its final configuration is calculated off-line through time-stepping molecular dynamics (MD) simulations. At each time step, the optimum rigid-body transformation of the ligand molecule is calculated using a new approach, which minimizes the distance error between the previous rigid-body coordinates of its atoms and their new coordinates calculated by the MD simulations. The simulations continue until the ligand molecule arrives at the lowest-energy configuration.
Our experimental studies conducted with six human subjects testing six different molecular complexes demonstrate that given a ligand molecule and five potential binding sites on a protein surface, the subjects can successfully identify the true binding site using visual and haptic cues. Moreover, they can roughly align the ligand molecule inside the binding cavity such that the final configuration of the ligand molecule can be determined via the proposed MD simulations.
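The step the abstract describes at each time step, finding the rigid-body transformation that minimizes the summed squared distance between the atoms' previous rigid-body coordinates and their MD-displaced coordinates, is the classic orthogonal Procrustes problem. A minimal sketch using the standard SVD-based (Kabsch) solution follows; the paper's exact formulation may differ, and the function below is only an assumed illustration.

```python
# Least-squares rigid-body fit: find rotation R and translation t minimizing
# sum_i || R p_i + t - q_i ||^2, where P holds the previous rigid-body atom
# coordinates and Q the new coordinates from the MD step (Kabsch algorithm).
import numpy as np

def rigid_fit(P, Q):
    """P, Q: (N, 3) arrays of corresponding atom positions. Returns (R, t)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp                            # translation after rotation
    return R, t
```

Applying the fitted (R, t) to the previous rigid configuration gives the next rigid-body pose, so the ligand stays rigid while still following the per-atom MD forces, which is consistent with the abstract's claim of avoiding the heavier standard rigid-body MD formulation.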
ABSTRACT: Cagatay Basdogan, College of Engineering, Koc University, Istanbul, 34450. In this paper, we present computationally efficient methods for visualization and simulation of molecular interactions in virtual environments with haptic feedback. In our simulations, the haptic device is used to guide a rigid ligand molecule into a receptor site while the molecular forces acting on the ligand molecule are scaled and reflected to the user in real time. We demonstrate that the presence of a haptic interface accelerates the binding process and reduces binding errors when it is used as a precursor to estimate the initial configuration of the ligand molecule at the binding site. After placement and rough alignment of the ligand molecule inside the binding cavity with the help of the haptic device, we use a novel computational approach to determine its final binding configuration. In this approach, the ligand molecule is treated as a rigid body seeking the lowest potential-energy configuration. The intermediate rigid-body configurations of the ligand molecule are calculated in a least-squares sense from the new positions of its atoms moving under the influence of molecular interaction forces. The proposed approach is computationally more efficient than molecular simulation methods that utilize the standard rigid-body formulations. We also present new methods for interactive haptic visualization of a protein surface to search for potential binding sites.
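The abstract mentions that molecular forces are "scaled and reflected to the user" but does not specify the force model or the scaling scheme. As a hedged illustration only, the sketch below uses a Lennard-Jones pairwise force (a common choice for nonbonded interactions, assumed here, not stated in the paper) and clamps the scaled force to a hypothetical device limit, since raw molecular force magnitudes vary over many orders of magnitude and would otherwise saturate or destabilize a haptic device.

```python
# Illustrative haptic force rendering: a Lennard-Jones pair force (assumed
# model) is scaled to device units and clamped to a safe maximum output.
import numpy as np

def lj_force(r, epsilon=1.0, sigma=1.0):
    """Radial Lennard-Jones force magnitude at separation r (>0 = repulsive).
    Derived from U(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6 ** 2 - sr6) / r

def haptic_force(r, scale=0.05, f_max=3.0):
    """Scale the molecular force and clamp it to the device's output range.
    'scale' and 'f_max' are hypothetical device-tuning parameters."""
    return float(np.clip(scale * lj_force(r), -f_max, f_max))
```

The force is zero at the potential minimum r = 2^(1/6) * sigma, pushes outward at closer range, and pulls inward beyond it, so the user feels a snap toward an energetically favorable pose, which is the guidance effect the abstract attributes to the haptic feedback.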