Thesis

Co-localisation des retours visuels et haptiques en Réalité Virtuelle (Co-location of visual and haptic feedback in Virtual Reality)


Abstract

Virtual Reality offers a user the possibility of interacting with highly diverse environments, and it therefore holds strong potential in research, medicine, and industry. Most applications require manipulating virtual objects, so visual and haptic feedback are delivered to the user through interfaces in order to help them operate. In the current state of development, however, an inconsistency remains in the coupling of the two interfaces that potentially affects the quality of the interaction: the visual feedback is generally displayed in front of the user, while the haptic feedback is delivered at the hand. This so-called delocalized configuration is simple to implement but introduces a spatial incongruence between the two stimuli. Conversely, a configuration in which the visual and haptic feedback are co-localized is more realistic but more complex to set up.

The thesis addresses the differences produced by the two configurations in order to better understand the mechanisms involved. It is divided into three parts:
• The design of a co-localized interface to serve as a study tool.
• The comparison of user performance during elementary operations.
• The comparison of stimulus integration.

A desktop interface in which the visual and haptic feedback are co-localized is developed. To that end, a new cable-driven haptic device architecture is proposed and integrated into a screen/mirror display system. The mechanism is redundant, so a control strategy based on varying the stiffness of the cables is put forward and tested in order to improve the transparency of the device.

A user experiment is then conducted with this interface, evaluating the effectiveness of 36 participants operating in the co-localized and delocalized configurations. Faster, more precise execution and greater stability in contact forces are demonstrated in the co-localized configuration. Moreover, a difference in stability is observed depending on the type of delocalization. To better characterize the differences between the configurations, a user experiment is also carried out on stimulus integration: the weights of visual, proprioceptive, and tactile feedback in the perception process are measured in the co-localized and delocalized configurations.
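Since the thesis builds on a redundantly actuated cable device, a minimal sketch of the underlying tension-distribution problem may help. This is a generic textbook formulation, not the stiffness-variation controller proposed in the thesis; the function name, the one-cable-redundancy assumption, and the example geometry are ours.

```python
import numpy as np

def cable_tensions(A, wrench, t_min):
    """Tension distribution for a cable mechanism with one degree of
    actuation redundancy: solve A @ t = wrench, then shift along the
    one-dimensional nullspace of A until every tension is at least
    t_min (cables can only pull).

    A is the (dof, n_cables) structure matrix of unit cable directions.
    Assumes the nullspace direction has entries of one sign, which
    holds inside the wrench-feasible workspace.
    """
    t_particular = np.linalg.pinv(A) @ wrench
    n = np.linalg.svd(A)[2][-1]      # nullspace basis vector of A
    if n.sum() < 0:
        n = -n                       # orient it toward positive tensions
    lam = max(0.0, np.max((t_min - t_particular) / n))
    return t_particular + lam * n

# Planar point mass held by 3 cables 120 degrees apart, zero net wrench:
A = np.array([[0.0, -np.sqrt(3) / 2, np.sqrt(3) / 2],
              [1.0, -0.5, -0.5]])
t = cable_tensions(A, np.zeros(2), t_min=1.0)
print(np.round(t, 3))  # [1. 1. 1.]
```

With a zero wrench the minimum-norm solution is zero tension everywhere, so the whole correction comes from the nullspace shift, which is exactly what keeps the cables taut.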


Article
Full-text available
We describe a simple statistical framework intended as a model of how depth estimates derived from consistent depth cues are combined in biological vision. We assume that the rule of combination is linear, and that the weights assigned to estimates in the linear combination are variable. These weight values corresponding to different depth cues are determined by ancillary measures, information concerning the likely validity of different depth cues in a particular scene. The parameters of the framework may be estimated psychophysically by procedures described. The conditions under which the framework may be regarded as normative are discussed.
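The linear rule of combination described above can be sketched in a few lines; the function and the numbers below are illustrative, not taken from the paper.

```python
import numpy as np

def combine_depth_estimates(estimates, weights):
    """Linear cue combination: weighted average of per-cue depth
    estimates. Weights need not be normalized; they are rescaled to
    sum to 1, mirroring the assumption that cue weights trade off
    against one another depending on ancillary measures.
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return float(weights @ estimates)

# Stereo says 50 cm, texture says 56 cm; stereo judged twice as reliable.
depth = combine_depth_estimates([50.0, 56.0], [2.0, 1.0])
print(depth)  # 52.0
```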
Conference Paper
Full-text available
This paper describes the CAVE (CAVE Automatic Virtual Environment) virtual reality/scientific visualization system in detail and demonstrates that projection technology applied to virtual-reality goals achieves a system that matches the quality of workstation screens in terms of resolution, color, and flicker-free stereo. In addition, this format helps reduce the effect of common tracking and system latency errors. The off-axis perspective projection techniques we use are shown to be simple and straightforward. Our techniques for doing multi-screen stereo vision are enumerated, and design barriers, past and current, are described. Advantages and disadvantages of the projection paradigm are discussed, with an analysis of the effect of tracking noise and delay on the user. Successive refinement, a necessary tool for scientific visualization, is developed in the virtual reality context. The use of the CAVE as a one-to-many presentation device at SIGGRAPH '92 and Supercomputing '92 for computational science data is also mentioned.
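The off-axis perspective projection the CAVE relies on reduces to similar triangles between the tracked eye and the screen plane. A sketch, assuming a screen lying in the z = 0 plane and a glFrustum-style (left, right, bottom, top) convention; the function name and geometry are ours, not from the paper.

```python
def off_axis_frustum(eye, screen_left, screen_right,
                     screen_bottom, screen_top, near):
    """Asymmetric frustum bounds at the near plane for a tracked eye.

    The screen lies in the z = 0 plane; the eye sits at (ex, ey, ez)
    with ez > 0, looking toward -z. The returned (l, r, b, t) can be
    fed to a glFrustum-style projection.
    """
    ex, ey, ez = eye
    scale = near / ez  # similar triangles: project screen edges onto near plane
    left = (screen_left - ex) * scale
    right = (screen_right - ex) * scale
    bottom = (screen_bottom - ey) * scale
    top = (screen_top - ey) * scale
    return left, right, bottom, top

# Eye centered 2 m from a 2 m-wide, 1.5 m-tall screen: a symmetric frustum.
print(off_axis_frustum((0.0, 0.0, 2.0), -1.0, 1.0, -0.75, 0.75, 0.1))
# approximately (-0.05, 0.05, -0.0375, 0.0375)
```

As the head moves off-center, the four bounds become asymmetric, which is exactly the "off-axis" case the paper calls simple and straightforward.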
Article
Full-text available
This paper describes an experiment to assess the anxiety responses of people giving 5 min. presentations to virtual audiences consisting of eight male avatars. There were three different types of audience behavior: an emotionally neutral audience that remained static throughout the talk, a positive audience that exhibited friendly and appreciative behavior towards the speaker, and a negative audience that exhibited hostile and bored expressions throughout the talk. A second factor was immersion: half of the forty subjects experienced the virtual seminar room through a head-tracked, head-mounted display and the remainder on a desktop system. Responses were measured using the standard Personal Report of Confidence as a Public Speaker (PRCS), which was elicited prior to the experiment and after each talk. Several other standard psychological measures such as SCL-90-R (for screening for psychological disorder), the SAD, and the FNE were also measured prior to the experiment. Other response variables included subjectively assessed somaticization and a subject self-rating scale on performance during the talk. The subjects gave the talk twice each to a different audience, but in the analysis only the results of the first talk are presented, thus making this a between-groups design. The results show that post-talk PRCS is significantly and positively correlated to PRCS measured prior to the experiment in the case only of the positive and static audiences. For the negative audience, prior PRCS was not a predictor of post-PRCS, which was higher than for the other two audiences and constant. The negative audience clearly provoked an anxiety response irrespective of the normal level of public speaking confidence of the subject. The somatic response also showed a higher level of anxiety for the negative audience than for the other two, but self-rating was generally higher only for the static audience, each of these results taking into account prior PRCS.
Article
Full-text available
At the EPFL, we have developed a force-feedback device and control architecture for high-end research and industrial applications. The Delta Haptic Device (DHD) consists of a 6 degrees-of-freedom (DOF) mechatronic device driven by a PC. Several experiments have been carried out in the fields of manipulation and simulation to assess the dramatic improvement haptic information brings to manipulation. This system is particularly well suited for scaled manipulation such as micro-, nano- and biomanipulation. Not only can it perform geometric and force scaling, but it can also include fairly complex physical models into the control loop to assist manipulation and enhance human understanding of the environment. To demonstrate this ability, we are currently interfacing our DHD with an atomic force microscope (AFM). In a first stage, we will be able to feel in real-time the topology of a given sample while visualizing it in 3D. The aim of the project is to make manipulation of carbon nanotubes possible by including physical models of such nanotubes behavior into the control loop, thus allowing humans to control complex structures. In this paper, we give a brief description of our device and present preliminary results of its interfacing with the AFM.
Article
Full-text available
When integrating signals from vision and haptics the brain must solve a "correspondence problem" so that it only combines information referring to the same object. An invariant spatial rule could be used when grasping with the hand: here the two signals should only be integrated when the estimate of hand and object position coincide. Tools complicate this relationship, however, because visual information about the object, and the location of the hand, are separated spatially. We show that when a simple tool is used to estimate size, the brain integrates visual and haptic information in a near-optimal fashion, even with a large spatial offset between the signals. Moreover, we show that an offset between the tool-tip and the object results in similar reductions in cross-modal integration as when the felt and seen positions of an object are offset in normal grasping. This suggests that during tool use the haptic signal is treated as coming from the tool-tip, not the hand. The brain therefore appears to combine visual and haptic information, not based on the spatial proximity of sensory stimuli, but based on the proximity of the distal causes of stimuli, taking into account the dynamics and geometry of tools.
Article
Full-text available
Subjects attempted to recognize simple line drawings of common objects using either touch or vision. In the touch condition, subjects explored raised line drawings using the distal pad of the index finger or the distal pads both of the index and of the middle fingers. In the visual condition, a computer-driven display was used to simulate tactual exploration. By moving an electronic pen over a digitizing tablet, the subject could explore a line drawing stored in memory; on the display screen a portion of the drawing appeared to move behind a stationary aperture, in concert with the movement of the pen. This aperture was varied in width, thus simulating the use of one or two fingers. In terms of average recognition accuracy and average response latency, recognition performance was virtually the same in the one-finger touch condition and the simulated one-finger vision condition. Visual recognition performance improved considerably when the visual field size was doubled (simulating two fingers), but tactual performance showed little improvement, suggesting that the effective tactual field of view for this task is approximately equal to one finger pad. This latter result agrees with other reports in the literature indicating that integration of two-dimensional pattern information extending over multiple fingers on the same hand is quite poor. The near equivalence of tactual picture perception and narrow-field vision suggests that the difficulties of tactual picture recognition must be largely due to the narrowness of the effective field of view.
Article
Full-text available
This study explored whether virtual reality (VR) exposure therapy was effective in the treatment of spider phobia. We compared a treatment condition vs. a waiting list condition in a between group design with 23 participants. Participants in the VR treatment group received an average of four one-hour exposure therapy sessions. VR exposure was effective in treating spider phobia compared to a control condition as measured with a Fear of Spiders questionnaire, a Behavioural Avoidance Test (BAT), and severity ratings made by the clinician and an independent assessor. Eighty-three percent of patients in the VR treatment group showed clinically significant improvement compared with 0% in the waiting list group, and no patients dropped out. This study shows that VR exposure can be effective in the treatment of phobias.
Article
Full-text available
How does the visual system combine information from different depth cues to estimate three-dimensional scene parameters? We tested a maximum-likelihood estimation (MLE) model of cue combination for perspective (texture) and binocular disparity cues to surface slant. By factoring the reliability of each cue into the combination process, MLE provides more reliable estimates of slant than would be available from either cue alone. We measured the reliability of each cue in isolation across a range of slants and distances using a slant-discrimination task. The reliability of the texture cue increases as |slant| increases and does not change with distance. The reliability of the disparity cue decreases as distance increases and varies with slant in a way that also depends on viewing distance. The trends in the single-cue data can be understood in terms of the information available in the retinal images and issues related to solving the binocular correspondence problem. To test the MLE model, we measured perceived slant of two-cue stimuli when disparity and texture were in conflict and the reliability of slant estimation when both cues were available. Results from the two-cue study indicate, consistent with the MLE model, that observers weight each cue according to its relative reliability: Disparity weight decreased as distance and |slant| increased. We also observed the expected improvement in slant estimation when both cues were available. With few discrepancies, our data indicate that observers combine cues in a statistically optimal fashion and thereby reduce the variance of slant estimates below that which could be achieved from either cue alone. These results are consistent with other studies that quantitatively examined the MLE model of cue combination. Thus, there is a growing empirical consensus that MLE provides a good quantitative account of cue combination and that sensory information is used in a manner that maximizes the precision of perceptual estimates.
Article
Full-text available
The article addresses the technical principles of a new high-performance haptic device, called HapticMaster, and gives an overview of its haptic performance. The HapticMaster utilizes the admittance control paradigm, which facilitates high stiffness, large forces, and a high force sensitivity. In addition, the HapticMaster has a large workspace and a high haptic resolution. Typical applications for the HapticMaster are therefore found in virtual reality, haptics research, and rehabilitation.
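The admittance paradigm mentioned here works in the opposite direction from impedance control: the device measures force and commands motion. A minimal single-axis sketch with a virtual mass-damper; the parameters are illustrative, not HapticMaster values.

```python
def admittance_step(force, v, m, b, dt):
    """One step of an admittance controller: measured force in,
    commanded velocity out, via a virtual mass-damper m*dv/dt = F - b*v."""
    a = (force - b * v) / m
    return v + a * dt

# A constant 1 N push on a 2 kg / 4 N*s/m virtual object approaches
# the steady-state velocity F/b = 0.25 m/s.
v = 0.0
for _ in range(10000):
    v = admittance_step(1.0, v, m=2.0, b=4.0, dt=0.001)
print(round(v, 3))  # 0.25
```

Because the commanded motion is computed rather than measured, stiff contact can be rendered simply by feeding large reaction forces into the virtual model, which is why admittance devices excel at high stiffness and large forces.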
Article
Full-text available
Mark Raymond Mine, Exploiting Proprioception in Virtual-Environment Interaction (under the direction of Frederick P. Brooks, Jr.). Manipulation in immersive virtual environments is difficult partly because users must do without the haptic contact with real objects they rely on in the real world to orient themselves and the objects they are manipulating. To compensate for this lack, I propose exploiting the one real object every user has in a virtual environment, his body. I present a unified framework for virtual-environment interaction based on proprioception, a person's sense of the position and orientation of his body and limbs. I describe three forms of body-relative interaction:
• Direct manipulation: ways to use body sense to help control manipulation
• Physical mnemonics: ways to store/recall information relative to the body
• Gestural actions: ways to use body-relative actions to issue commands
Automatic scaling is a way to bring objects instantly within reach so that users can manipulate them using proprioceptive cues. Several novel virtual interaction techniques based upon automatic scaling and our proposed framework of proprioception allow a user to interact with a virtual world intuitively, efficiently, precisely, and lazily. Two formal user studies evaluate key aspects of body-relative interaction. The virtual docking study compares the manipulation of objects co-located with one's hand and the manipulation of objects at a distance. The widget interaction experiment explores the differences between interacting with a widget held in one's hand and interacting with a widget floating in space. Lessons learned from the integration of body-relative techniques into a real-world system, the Chapel Hill Immersive Modeling Program (CHIMP), are presented and discussed.
Article
Full-text available
Most research on 3D user interfaces aims at providing only a single sensory modality. One challenge is to integrate several sensory modalities into a seamless system while preserving each modality's immersion and performance factors. This paper concerns manipulation tasks and proposes a visuo-haptic system integrating immersive visualization, force and tactile feedback with co-location. An industrial application is presented.
Article
Table of contents. Part I: Powers (1. What I Say Goes). Part II: Prophecies (2. Earth, Breath, Frenzy: The Delphic Oracle; 3. Origen, Eustathius, and The Witch of Endor). Part III: Possessions (4. Hoc Est Corpus; 5. The Exorcism of John Darrell; 6. O, that Oh is the Devill: Glover and Harsnett). Part IV: Prodigies (7. Miracles and the Encyclopedie; 8. Speaking Parts: Diderot and Les Bijoux indiscrets; 9. The Abbe and the Ventriloque; 10. The Dictate of Phrenzy: Charles Brockden Brown's Wieland). Part V: Polyphonics (11. Ubiquitarical; 12. At Home and Abroad: Monsieur Alexandre and Mr Matthews; 13. Phenomena in the Philosophy of Sound: Mr Love; 14. Writing the Voice). Part VI: Prosthetics (15. Vocal Reinforcement; 16. Talking Heads, Automaton Ears; 17. A Gramophone in Every Grave). Part VII: No Time Like the Present (18. No Time Like the Present). Works Cited. Index.
Book
This is the first book to introduce the new statistics—effect sizes, confidence intervals, and meta-analysis-in an accessible way. It is chock full of practical examples and tips on how to analyze and report research results using these techniques. The book is invaluable to readers interested in meeting the new APA Publication Manual guidelines by adopting the new statistics—which are more informative than null hypothesis significance testing, and becoming widely used in many disciplines. This highly accessible book is intended as the core text for any course that emphasizes the new statistics, or as a supplementary text for graduate and/or advanced undergraduate courses in statistics and research methods in departments of psychology, education, human development, nursing, and natural, social, and life sciences. Researchers and practitioners interested in understanding the new statistics, and future published research, will also appreciate this book. A basic familiarity with introductory statistics
Article
Examined how successfully minority students' learning styles could be matched with computer instruction and studied the effects of matching on Ss' achievement, reflectivity, and self-esteem. 16 Black and 20 White 1st graders were randomly assigned to LOGO instruction, computer assisted instruction (CAI), or a no exposure control group. Pairs of Ss received teaching from White instructors for 10 wks. Results indicate that Black LOGO-instructed Ss outscored White LOGO-instructed Ss on a standardized mathematics test. Black CAI Ss scored lower on a measure of self-reflectivity than Black or White Ss in the other conditions. Findings suggest that learning style is mediated through social and cultural contexts. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Increasingly we are coming to understand that left-handedness has social, educational, and psychological implications and affects many aspects of health, well-being, and even life span. This book focuses on all that distinguishes right- and left-handers. It demonstrates that handedness is only one part of sidedness, which also includes footedness, eyedness, and earedness, and shows readers how to measure their own sidedness. The book answers some common questions such as: Where does handedness come from? Is it coded in the genes? Does it stem from social pressure? Might it indicate some damage or injury? Is it related to the organization of the brain, and how? Further, the book examines the differences between left- and right-handers in terms of intelligence, personality, creativity, and a number of other domains. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Ss were confronted with a situation which mimicked the visuomotor consequences of an 11-deg lateral displacement of the visual field (leftward in Experiment I and rightward in Experiment II). The displacement was effected by having E place his own finger to one side of S’s nonvisible finger. Ss who were informed of this deception prior to the exposure period (informed group) manifested significantly less adaptation (“negative aftereffect” and “proprioceptive shift”) than did Ss who were told that their vision would be displaced by the goggles which they were wearing (misinformed group). It was concluded that adaptation to visual rearrangement is strongly influenced by S’s assumptions regarding the adequacy of his vision and the identity of the manual limb which he is viewing.
Article
A general formula (α) of which a special case is the Kuder-Richardson coefficient of equivalence is shown to be the mean of all split-half coefficients resulting from different splittings of a test. α is therefore an estimate of the correlation between two random samples of items from a universe of items like those in the test. α is found to be an appropriate index of equivalence and, except for very short tests, of the first-factor concentration in the test. Tests divisible into distinct subtests should be so divided before using the formula. The index r̄_ij, derived from α, is shown to be an index of inter-item homogeneity. Comparison is made to the Guttman and Loevinger approaches. Parallel split coefficients are shown to be unnecessary for tests of common types. In designing tests, maximum interpretability of scores is obtained by increasing the first-factor concentration in any separately-scored subtest and avoiding substantial group-factor clusters within a subtest. Scalability is not a requisite.
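For concreteness, α can be computed directly from a subjects-by-items score matrix using the standard form α = k/(k-1) * (1 - Σ item variances / variance of total score). The function below is our own illustration, not from the article.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the
    per-subject total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Two perfectly correlated items give alpha = 1.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```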
Article
Virtual reality (VR) has been gaining popularity in surgical planning and simulation. Most VR surgical simulation systems provide haptic (pertinent to the sense of touch) and visual information simultaneously using certain alignments between a haptic device and visual display. A critical aspect of such VR surgical systems is to represent both haptic and visual information accurately to avoid perceptual illusions (e.g., to distinguish the softness of organs/tissues). This study compared three different alignments (same-location alignment, vertical alignment, and horizontal alignment) between a haptic device and visual display that are widely used in VR systems. We conducted three experiments to study the influence of each alignment on the perception of object softness. In each experiment, we tested 15 different human subjects with varying availability of haptic and visual information. During each trial, the task of the subject was to discriminate object softness between two deformable balls in different viewing angles. We analyzed the following dependent measurements: subject perception of object softness and objective measurements of maximum force and maximum pressing depth. The analysis results reveal that all three alignments (independent variables) have similar effect on subjective perception of object softness within the interval of viewing angles from -7.5° to +7.5°. The viewing angle does not affect objective measurements. The same-location alignment requires less physical effort compared with the other two alignments. These observations have implications in creating accurate simulation and interaction for VR surgical systems.
Conference Paper
This paper proposes a new approach for adding haptic feedback on a workbench. The integration of a stringed haptic device is discussed.
Article
The CAVE is a new virtual reality interface. In its abstract design, it consists of a room whose walls, ceiling and floor surround a viewer with projected images. Its design overcomes many of the problems encountered by other virtual reality systems and can be constructed from currently available technology. Suspension of disbelief and viewer-centered perspective are often used to describe such systems. The main topics are head-mounted display (HMD), binocular omni-oriented monitor (BOOM), and audio-visual experience automatic virtual environment (CAVE). The article concludes that the CAVE is a nonintrusive easy-to-learn high-resolution virtual reality interface. It is superior to other virtual reality paradigms across many issues, particularly in field-of-view, visual acuity and lack of intrusion. Moreover, it is open to limited use for collaborative visualization.
Article
We present a physically-based approach to grasping and manipulation of virtual objects that produces visually realistic results, addresses the problem of visual inter-penetration of hand and object models, and performs force rendering for force-feedback gloves, in a single framework. Our approach couples a simulation-controlled articulated hand model to tracked hand configuration using a system of linear and torsional virtual spring-dampers. We discuss an implementation of our approach that uses a widely available simulation tool for collision detection and response. We pictorially illustrate the resulting behavior of the virtual hand model and of grasped objects, discuss user behavior and difficulties encountered, and show that the simulation rate is sufficient for control of current force-feedback glove designs. We also present a prototype system for natural whole-hand interactions in a desktop-sized workspace.
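The spring-damper virtual coupling described above has a simple closed form: the proxy hand is pulled toward the tracked configuration, and the negated force is rendered to the glove. A sketch with hypothetical gains; the paper's actual gains and simulation tool are not reproduced here.

```python
import numpy as np

def coupling_force(tracked_pos, proxy_pos, tracked_vel, proxy_vel, k, b):
    """Per-axis linear spring-damper virtual coupling: the force that
    pulls the simulation-controlled proxy hand toward the tracked hand.
    The same force, negated, can drive a force-feedback glove."""
    dp = np.asarray(tracked_pos, float) - np.asarray(proxy_pos, float)
    dv = np.asarray(tracked_vel, float) - np.asarray(proxy_vel, float)
    return k * dp + b * dv

# 1 cm positional error along x with k = 500 N/m and no velocity error:
f = coupling_force((0.01, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0),
                   k=500.0, b=5.0)
print(f)  # [5. 0. 0.]
```

Because the proxy, not the tracked hand, collides with objects, the coupling also prevents the visual inter-penetration of hand and object models that the paper highlights.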
Article
A real-time augmented reality (AR) user interface for nanoscale interaction and manipulation applications using an atomic force microscope (AFM) is presented. Nanoscale three-dimensional (3-D) topography and force information sensed by an AFM probe are fed back to a user through a simulated AR system. The sample surface is modeled with a B-spline-based geometry model, upon which a collision detection algorithm determines whether and how the spherical AFM tip penetrates the surface. Based on these results, the induced surface deformations are simulated using continuum micro/nanoforce and Maugis-Dugdale elastic contact mechanics models, and 3-D decoupled force feedback information is obtained in real time. The simulated information is then blended in real time with the force measurements of the AFM in an AR human machine interface, comprising a computer graphics environment and a haptic interface. Accuracy, usability, and reliability of the proposed AR user interface is tested by experiments for three tasks: positioning the AFM probe tip close to a surface, just in contact with a surface, or below a surface by elastically indenting. Results of these tests showed the performance of the proposed user interface. This user interface would be critical for many nanorobotic applications in biotechnology, nanodevice prototyping, and nanotechnology education
Article
Could weight, temperature, and texture combine to bring simulated objects to life? Describing cutting-edge technology that will influence the way we interact with computers for years to come, this pioneering book answers yes: not only is it possible, but devices capable of providing force and tactile sensory feedback already exist. Force and Touch Feedback for Virtual Reality is the first comprehensive source of information on the design, modeling, and applications of force and tactile interfaces for VR. It is a must have for scientists, engineers, psychologists, and developers involved in VR, and for anyone who would like to gain a deeper understanding of this exciting and fast-growing field. Complete with hundreds of tables, figures, and color illustrations, Force and Touch Feedback for Virtual Reality offers * Basic information on human tactile sensing and control and feedback actuator technology * A worldwide survey of force and tactile interface devices, from the simple joystick to full-body instrumented suits based on human factor tests * Step-by-step instructions for realistic physical modeling of virtual object characteristics such as weight, surface smoothness, compliance, and temperature * A unified treatment of the benefits of the new haptic interface technology for simulation and training based on human factor tests * A detailed analysis of optimum control requirements for force and tactile feedback devices * A review of emerging applications in areas ranging from surgical training and entertainment to telerobotics and the military
Article
A project to develop a haptic display for 6-D force fields of interacting protein molecules began in 1967. The authors approached it in four stages: a 2-D system, a 3-D system tested with a simple task, a 6-D system tested with a simple task, and a full 6-D molecular docking system, which was the initial goal. This paper summarizes the entire project, the four systems, the evaluation experiments, the results, and the authors' observations.
Article
(1) The literature relating to the effects of benzodiazepines on psychomotor performance is critically reviewed. (2) The multiple and diverse psychomotor tests used are assessed according to their ability to demonstrate differences between drugs. (3) Three general conclusions are drawn: (a) the speed with which simple acts of a repetitive nature are performed may be impaired by benzodiazepines; (b) learning and immediate memory will also be impaired; (c) there is relatively little indication that well-established higher mental faculties are adversely involved.
Article
Separate groups of people estimated the sizes of perceived or of remembered objects. In three independent experiments, both sets of data were well fit by power functions, and the exponent was reliably smaller for remembered than for perceived size.
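The power-function fit used in such magnitude-estimation studies is commonly obtained by linear regression in log-log coordinates, where S = c * I^n becomes a straight line with slope n. The function name and values below are ours.

```python
import numpy as np

def power_law_exponent(intensities, estimates):
    """Fit S = c * I^n (a Stevens-style power function) by linear
    regression in log-log coordinates; returns the exponent n."""
    x = np.log(np.asarray(intensities, dtype=float))
    y = np.log(np.asarray(estimates, dtype=float))
    n, _ = np.polyfit(x, y, 1)
    return float(n)

# Synthetic data generated with exponent 0.8 is recovered.
I = np.array([1.0, 2.0, 4.0, 8.0])
print(round(power_law_exponent(I, 3.0 * I ** 0.8), 3))  # 0.8
```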
Article
As research on sensation and perception has grown more sophisticated during the last century, new adaptive methodologies have been developed to increase efficiency and reliability of measurement. An experimental procedure is said to be adaptive if the physical characteristics of the stimuli on each trial are determined by the stimuli and responses that occurred in the previous trial or sequence of trials. In this paper, the general development of adaptive procedures is described, and three commonly used methods are reviewed. Typically, a threshold value is measured using these methods, and, in some cases, other characteristics of the psychometric function underlying perceptual performance, such as slope, may be developed. Results of simulations and experiments with human subjects are reviewed to evaluate the utility of these adaptive procedures and the special circumstances under which one might be superior to another.
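A common member of the adaptive family reviewed here is the 2-down/1-up staircase, which converges near the 70.7%-correct point of the psychometric function. A minimal sketch; the class name and parameters are illustrative.

```python
class TwoDownOneUp:
    """Minimal 2-down/1-up staircase: the stimulus level decreases
    after two consecutive correct responses and increases after each
    error, tracking roughly the 70.7%-correct threshold."""

    def __init__(self, start, step):
        self.level = start
        self.step = step
        self._streak = 0  # consecutive correct responses so far

    def update(self, correct):
        if correct:
            self._streak += 1
            if self._streak == 2:        # two in a row: make it harder
                self.level -= self.step
                self._streak = 0
        else:                            # an error: make it easier
            self.level += self.step
            self._streak = 0
        return self.level

s = TwoDownOneUp(start=10.0, step=1.0)
print(s.update(True), s.update(True), s.update(False))  # 10.0 9.0 10.0
```

In practice the step size is often reduced after reversals, and the threshold is estimated from the mean of the last several reversal levels.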
Article
When a person looks at an object while exploring it with their hand, vision and touch both provide information for estimating the properties of the object. Vision frequently dominates the integrated visual-haptic percept, for example when judging size, shape or position, but in some circumstances the percept is clearly affected by haptics. Here we propose that a general principle, which minimizes variance in the final estimate, determines the degree to which vision or haptics dominates. This principle is realized by using maximum-likelihood estimation to combine the inputs. To investigate cue combination quantitatively, we first measured the variances associated with visual and haptic estimation of height. We then used these measurements to construct a maximum-likelihood integrator. This model behaved very similarly to humans in a visual-haptic task. Thus, the nervous system seems to combine visual and haptic information in a fashion that is similar to a maximum-likelihood integrator. Visual dominance occurs when the variance associated with visual estimation is lower than that associated with haptic estimation.
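The maximum-likelihood integrator has a closed form for two Gaussian cues: each estimate is weighted by its inverse variance, and the combined variance falls below either single-cue variance. A sketch with illustrative numbers, not data from the study.

```python
def mle_integrate(s_v, var_v, s_h, var_h):
    """Maximum-likelihood integration of a visual and a haptic estimate.

    Each cue is weighted by its reliability (inverse variance); the
    combined variance is lower than either single-cue variance, which
    is why integrating both cues pays off.
    """
    w_v = (1 / var_v) / (1 / var_v + 1 / var_h)
    w_h = 1 - w_v
    s = w_v * s_v + w_h * s_h
    var = 1 / (1 / var_v + 1 / var_h)
    return s, var

# Vision 4x more reliable than haptics: the percept sits closer to vision.
s, var = mle_integrate(50.0, 1.0, 54.0, 4.0)
print(round(s, 3), round(var, 3))  # 50.8 0.8
```

Visual dominance falls out of the formula whenever var_v is small relative to var_h, matching the paper's account of when haptics visibly affects the percept.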
Article
In this study, we evaluated observers' ability to compare naturally shaped three-dimensional (3-D) objects, using their senses of vision and touch. In one experiment, the observers haptically manipulated 1 object and then indicated which of 12 visible objects possessed the same shape. In the second experiment, pairs of objects were presented, and the observers indicated whether their 3-D shape was the same or different. The 2 objects were presented either unimodally (vision-vision or haptic-haptic) or cross-modally (vision-haptic or haptic-vision). In both experiments, the observers were able to compare 3-D shape across modalities with reasonably high levels of accuracy. In Experiment 1, for example, the observers' matching performance rose to 72% correct (chance performance was 8.3%) after five experimental sessions. In Experiment 2, small (but significant) differences in performance were obtained between the unimodal vision-vision condition and the two cross-modal conditions. Taken together, the results suggest that vision and touch have functionally overlapping, but not necessarily equivalent, representations of 3-D shape.
We present an exoskeletal haptic device called the SKK Hand Master. The device is semi-directly driven by linkages with ultrasonic motors, giving it characteristics close to those of cybernetic actuators. For its control, we propose a method for measuring the joint positions and joint torques of the finger, and present a control scheme called PWM/PS to overcome intrinsic drawbacks of ultrasonic motors such as hysteresis. The construction of the device is described, and several experimental results evaluating its performance are included.
The bandwidth and response characteristics required for effective telerobotic control are investigated. The requirements are defined on the basis of a survey of the existing literature on teleoperation, robotics, and human factors, conducted to determine whether the available knowledge is sufficient to specify them adequately. An idealized design goal based on the concept of telepresence is presented. To further clarify the response requirements, the control problem is divided into the two major components of teleoperation: the human operator and the telerobot. The literature survey showed that humans have asymmetrical input/output capabilities. It also showed that a human does not have a single relevant bandwidth; rather, bandwidth is a function of the human's mode of operation. In addition, 12 experts were queried to find a consensus based on experience. These results are presented and recommendations are derived from them.
[Berry, 1993] Berry, W. D. (1993). Understanding regression assumptions, volume 92. Sage Publications.
[Bolopion et al., 2012] Bolopion, A., Dahmen, C., Stolle, C., Haliyo, S., Régnier, S., and Fatikow, S. (2012). Vision-based haptic feedback for remote micromanipulation in-SEM environment. International Journal of Optomechatronics, 6(3):236-252.
[Borst and Indugula, 2006] Borst, C. W. and Indugula, A. P. (2006). A spring model for whole-hand virtual grasping. Presence: Teleoperators & Virtual Environments, 15(1):47-61.
[Botella et al., 2004] Botella, C., Osma, J., Garcia-Palacios, A., Quero, S., and Baños, R. (2004). Treatment of flying phobia using virtual reality: data from a 1-year follow-up using a multiple baseline design. Clinical Psychology & Psychotherapy: An International Journal of Theory & Practice, 11(5):311-323.
[Bouzit et al., 2002] Bouzit, M., Burdea, G., Popescu, G., and Boian, R. (2002). The Rutgers Master II — new design force-feedback glove. IEEE/ASME Transactions on Mechatronics, 7(2):256-263.
[Bülthoff, 2017] Bülthoff (2017). CableRobot Simulator.
[Cattin et al., 2012] Cattin, D., Sariyildiz, E., and Ohnishi, K. (2012). A null-space-based control for cable driven manipulators. In IECON 2012 — 38th Annual Conference on IEEE Industrial Electronics Society, pages 2132-2137. IEEE.
[Chaillet and Régnier, 2013] Chaillet, N. and Régnier, S. (2013). Microrobotics for Micromanipulation. John Wiley & Sons.
[Colgate et al., 1995] Colgate, J. E., Stanley, M. C., and Brown, J. M. (1995). Issues in the haptic display of tool use. In Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human Robot Interaction and Cooperative Robots, volume 3, pages 140-145. IEEE.
[Congedo et al., 2006] Congedo, M., Lécuyer, A., and Gentaz, E. (2006). The influence of spatial delocation on perceptual integration of vision and touch. Presence: Teleoperators and Virtual Environments, 15(3):353-357.
[Daunay, 2007] Daunay, B. (2007). Couplage haptique pour des applications de docking moléculaire. PhD thesis, Paris 6.
[Debats et al., 2017] Debats, N. B., Ernst, M. O., and Heuer, H. (2017). Kinematic cross-correlation induces sensory integration across separate objects. European Journal of Neuroscience, 46(12):2826-2834.
[Farah, 2000] Farah, M. J. (2000). The Cognitive Neuroscience of Vision. Blackwell Publishing.
[Ferreira and Mavroidis, 2006] Ferreira, A. and Mavroidis, C. (2006). Virtual reality and haptics for nanorobotics. IEEE Robotics & Automation Magazine, 13(3):78-92.
[Forlines et al., 2007] Forlines, C., Wigdor, D., Shen, C., and Balakrishnan, R. (2007). Direct-touch vs. mouse input for tabletop displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 647-656. ACM.
[Fox, 1991] Fox, J. (1991). Regression Diagnostics: An Introduction, volume 79. Sage.
[Freeman, 1981] Freeman, W. J. (1981). A physiological hypothesis of perception. Perspectives in Biology and Medicine, 24(4):561-592.
[Gonzalez, 2015] Gonzalez, F. (2015). Contributions au développement d'une interface haptique à contacts intermittents. PhD thesis, Université Pierre et Marie Curie - Paris VI.
[Gordon and Morison, 1982] Gordon, I. E. and Morison, V. (1982). The haptic perception of curvature. Perception & Psychophysics, 31(5):446-450.
[Kruger et al., 1995] Kruger, W., Bohn, C.-A., Frohlich, B., Schuth, H., Strauss, W., and Wesche, G. (1995). The responsive workbench: A virtual work environment. Computer, 28(7):42-48.
[Lebahar, 1983] Lebahar, J.-C. (1983). Le dessin d'architecte: simulation graphique et réduction d'incertitude, volume 1. Editions Parenthèses.
[Ochiai et al., 2016] Ochiai, Y., Kumagai, K., Hoshi, T., Rekimoto, J., Hasegawa, S., and Hayasaki, Y. (2016). Fairy lights in femtoseconds: aerial and volumetric graphics rendered by focused femtosecond laser combined with computational holographic fields. ACM Transactions on Graphics (TOG), 35(2):17.
[Pavlovic et al., 1997] Pavlovic, V. I., Sharma, R., and Huang, T. S. (1997). Visual interpretation of hand gestures for human-computer interaction: A review. IEEE Transactions on Pattern Analysis & Machine Intelligence, 19(7):677-695.
[Pont et al., 1999] Pont, S. C., Kappers, A. M., and Koenderink, J. J. (1999). Similar mechanisms underlie curvature comparison by static and dynamic touch. Perception & Psychophysics, 61(5):874-894.
[Teather et al., 2009] Teather, R. J., Allison, R. S., and Stuerzlinger, W. (2009). Evaluating visual/motor co-location in fish-tank virtual reality. In Science and Technology for Humanity (TIC-STH), 2009 IEEE Toronto International Conference, pages 624-629. IEEE.
[Tsai et al., 2001] Tsai, M.-D., Hsieh, M.-S., and Jou, S.-B. (2001). Virtual reality orthopedic surgery simulator. Computers in Biology and Medicine, 31(5):333-351.
[Wijntjes et al., 2009] Wijntjes, M. W., Sato, A., Hayward, V., and Kappers, A. M. (2009). Local surface orientation dominates haptic curvature discrimination. IEEE Transactions on Haptics, 2(2):94-102.