Questions related to Haptics
I am a researcher in haptics and telepresence. In this regard, I am hoping to find a Simulink/MATLAB model of any 3-DOF haptic device (Phantom Omni, Phantom Premium, Novint Falcon, etc.). Does anyone have such a model, or know someone who has used one in their work? Alternatively, does anyone have the OpenHaptics toolkit (from SensAble Technologies) for running the PhanTorque or PhanSim toolkit?
I have designed 16 different mid-air haptic icons, and in an identification study I had participants guess which metaphors from a list each of the icons represented. This gave a percentage for the participants' success in identifying the correct metaphor through the haptic icon. I now want to know if there is a correlation between the type of icon and the participants' identification scores. Icons can be classified on a continuum between representational and abstract, with semi-abstract lying in between. In order to classify my icon designs on this continuum, 3 raters gave each of the icons a score from 1 to 5 relating to the continuum (i.e. 1 = abstract, 3 = semi-abstract, 5 = representational).
My question is twofold:
- Can I accept the mode rating across the 3 raters as "true" (percentage agreement 66%), or do I need to find consensus agreement by revisiting the definitions with the raters?
- Also, to take chance into account, how should I calculate the kappa value (in SPSS), and what value would be acceptable?
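Since there are three raters (not two), Cohen's kappa (what SPSS's crosstabs procedure computes) only applies pairwise; Fleiss' kappa generalizes chance-corrected agreement to any fixed number of raters. As a minimal sketch of the calculation (so you can sanity-check whatever SPSS or an extension gives you), the function below takes an items × categories count matrix, where each cell holds how many raters assigned that item to that category. The matrix layout is an assumption for illustration, not your data.

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for an (items x categories) count matrix.
    ratings[i, j] = number of raters who assigned item i to category j."""
    ratings = np.asarray(ratings, dtype=float)
    n_items, _ = ratings.shape
    n_raters = ratings[0].sum()
    # Proportion of all assignments falling in each category
    p_j = ratings.sum(axis=0) / (n_items * n_raters)
    # Per-item observed agreement
    P_i = (np.square(ratings).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar = P_i.mean()            # mean observed agreement
    P_e = np.square(p_j).sum()    # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)
```

With perfect agreement (all 3 raters pick the same category for every icon) this returns 1.0. As a rough and commonly cited rule of thumb (Landis & Koch), values above about 0.6 are usually read as substantial agreement, though the appropriate cut-off is debated.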
This is my first time posting to ResearchGate; I often look through posts for questions similar to my own. I am in a Research Methods grad course and I am supposed to create a statistical hypothesis and choose a testing method for my study. I have developed this problem statement and these research questions:
The problem to be addressed is how to effectively deliver take-over requests (TOR) to get the quickest response time from a driver.
The overarching research questions for this problem statement are:
1) What type of take-over request (TOR) warrants the most efficient response time by the driver?
2) How do response times between elderly drivers and young drivers differ?
3) Do elderly drivers respond better to certain stimuli?
I want to test multiple TOR methods (visual/audio, audio/haptic, audio, haptic, and visual) against two categorical age groups (young and old drivers), with my dependent variable being response time.
Does a two-way ANOVA sound appropriate for this? I've asked two different professors: one gave me a cryptic "you can't do that" type of answer, and the other gave me an answer I'm still uncertain of. Please let me know if I'm headed in the correct direction with this, or if there is a better way.
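A two-way ANOVA does fit a design with one factor for TOR modality, one for age group, and response time as the dependent variable. As a sketch of what the test actually computes (in practice you would use SPSS, R, or statsmodels rather than hand-rolling it), the function below works out the F statistics for the two main effects and their interaction, assuming a balanced design (equal observations per cell); the factor names and data layout are hypothetical.

```python
import numpy as np

def two_way_anova(data):
    """Balanced two-way ANOVA from a dict {(levelA, levelB): [observations]}.
    Returns F statistics for factor A, factor B, and the A x B interaction."""
    a_levels = sorted({a for a, _ in data})
    b_levels = sorted({b for _, b in data})
    n = len(next(iter(data.values())))       # observations per cell (balanced)
    cells = {k: np.asarray(v, float) for k, v in data.items()}
    grand = np.mean([x for v in cells.values() for x in v])
    a_means = {a: np.mean([x for (aa, _), v in cells.items() if aa == a for x in v])
               for a in a_levels}
    b_means = {b: np.mean([x for (_, bb), v in cells.items() if bb == b for x in v])
               for b in b_levels}
    cell_means = {k: v.mean() for k, v in cells.items()}
    A, B = len(a_levels), len(b_levels)
    # Sums of squares for main effects, interaction, and error
    ss_a = n * B * sum((a_means[a] - grand) ** 2 for a in a_levels)
    ss_b = n * A * sum((b_means[b] - grand) ** 2 for b in b_levels)
    ss_ab = n * sum((cell_means[(a, b)] - a_means[a] - b_means[b] + grand) ** 2
                    for a in a_levels for b in b_levels)
    ss_err = sum(((cells[k] - cell_means[k]) ** 2).sum() for k in cells)
    df_a, df_b = A - 1, B - 1
    df_ab, df_err = df_a * df_b, A * B * (n - 1)
    ms_err = ss_err / df_err
    return ss_a / df_a / ms_err, ss_b / df_b / ms_err, ss_ab / df_ab / ms_err
```

For your study, factor A would have five levels (the TOR modalities) and factor B two levels (young vs. old); a significant interaction term would answer question 3, i.e. whether elderly drivers respond better to certain stimuli.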
I am currently studying the perception of audio-tactile envelope asynchrony.
For this research, it is important for me to know the audio and tactile just-noticeable differences (JNDs) for amplitude decrease (volume drop), but I could not find any literature on this specific subject.
The most closely related research I could find is a set of studies on audio-tactile gap discrimination, oriented toward measuring our perceptual resolution:
Why aren't there any studies that measure amplitude-decrease JNDs?
Or can the equal-loudness contours for audio and haptics be used to infer the amplitude-decrease JNDs?
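If no published values exist, one option is to measure the amplitude-decrement JND directly with a standard adaptive procedure. The sketch below simulates a 1-up-2-down staircase (which converges on the 70.7%-correct point of the psychometric function) against a hypothetical observer whose detection probability follows a logistic curve; all parameters here are illustrative assumptions, not values from the literature.

```python
import math
import random

def staircase_jnd(true_jnd, start=10.0, step=1.0, n_trials=400, seed=1):
    """Simulated 1-up-2-down adaptive staircase for an amplitude-decrement JND.
    The simulated observer detects a decrement of size `level` with
    p = logistic(level - true_jnd). Returns the mean of the last reversals,
    an estimate near the 70.7%-correct point."""
    rng = random.Random(seed)
    level, streak, last_dir = start, 0, 0
    reversals = []
    for _ in range(n_trials):
        p = 1 / (1 + math.exp(-(level - true_jnd)))
        if rng.random() < p:              # decrement detected
            streak += 1
            if streak == 2:               # two correct in a row -> make harder
                streak = 0
                if last_dir == +1:
                    reversals.append(level)
                level -= step
                last_dir = -1
        else:                             # missed -> make easier
            streak = 0
            if last_dir == -1:
                reversals.append(level)
            level += step
            last_dir = +1
    tail = reversals[-8:]
    return sum(tail) / len(tail) if tail else level
```

Running the same procedure with an audio decrement and a vibrotactile decrement would give you the two JNDs your asynchrony study needs, without relying on equal-loudness contours.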
But when I increased the controller gains by a factor of 100, the system seemed to be controlled. The system was measured with a difference of 0.002 milliseconds.
I have a Phantom Omni haptic device and want to render a deformable shape. For this I define a plane of vertices; when the cursor collides with the shape, vertices near the collision point are moved down, so the surface more or less deforms. The problem is that this method is too slow, and the force-feedback and rendering frequencies drop. What method would you suggest to solve this?
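One common cause of the slowdown is updating every vertex per frame in a Python- or script-level loop. Two standard remedies are (a) restricting the update to the local neighborhood of the contact point and (b) vectorizing it. Below is a minimal sketch of that idea (the radius, depth, and Gaussian falloff are illustrative assumptions); only vertices within the contact radius are touched, and the work is one vectorized pass rather than a per-vertex loop.

```python
import numpy as np

def deform_local(vertices, contact, radius=0.05, depth=0.02):
    """Vectorized local deformation of a vertex plane (N x 3 array, z up).
    Pushes vertices near the contact point down with a smooth Gaussian
    falloff, touching only the local neighborhood."""
    d = np.linalg.norm(vertices[:, :2] - contact[:2], axis=1)  # horizontal distance
    mask = d < radius
    # Gaussian falloff so the dent fades smoothly to zero at the rim
    fall = np.exp(-(d[mask] / (radius / 2.0)) ** 2)
    vertices[mask, 2] -= depth * fall
    return vertices
```

Equally important is decoupling the loops: run the haptic thread at ~1 kHz with a simple proxy/penalty force model against the current surface, and let the mesh update and rendering run at 30-60 Hz in a separate thread. The force loop should never wait on the deformation update.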
I am currently working on my Master's thesis and am about to run some tests on vibration. I have searched many biological and psychophysical articles, but I didn't find any data on vibration thresholds. I know they are hard to measure, but is there any data I can use as a guideline?
There are many technological challenges in haptics research. Among them, what is the most important grand technical challenge?
The common factor of the three communications is the ultra-low-latency requirement, but there are different definitions for them according to their applications. So, can we use "real-time communications" as a general name for them, or not?
Recently, New Media & Society (http://journals.sagepub.com/toc/nmsa/19/10) published a special edition on haptic media. One of the articles deals with the materialisation of data and the possibilities of 3D printing.
There are too many haptic descriptors (which depend on the product). The idea is to have a set of generic descriptors usable for any product...
Haptic feeling is measured by acceleration. However, how do human beings perceive acceleration? I am wondering over how long a period the effective acceleration really matters. Thank you!
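One common engineering answer is to quantify "effective" acceleration as a running RMS over a fixed integration window, which is essentially what the ISO 2631 vibration-exposure framework does (with frequency weighting added). The sketch below computes a windowed RMS; the 100 ms window is an illustrative assumption, not an established perceptual constant.

```python
import numpy as np

def running_rms(accel, fs, window_s=0.1):
    """Running RMS of an acceleration signal: one common notion of
    'effective' acceleration, averaged over an integration window.
    accel: samples (m/s^2), fs: sample rate (Hz), window_s: window length."""
    n = max(1, int(window_s * fs))
    kernel = np.ones(n) / n                       # moving-average kernel
    return np.sqrt(np.convolve(np.square(accel), kernel, mode='same'))
```

How long the perceptually relevant window actually is would be an empirical question; sweeping `window_s` and checking which value best predicts subjective intensity ratings would be one way to probe it.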
I have almost finished studying your dissertation, and would like to ask a few questions, if that is okay. In the sections about Tactos, you describe participants achieving better (more accurate) results when they "live" the zoom scale. In your research since then, do you continue to see that living the mutual scale results in greater accuracy?
I ask because lately I am noting that the mind seems to estimate best when it feels “self-relevancy” to the volume of space it is trying to estimate (when we project ourselves as a part of the predicted effects). Do you think this might be true? If so, do you think that it is important to "immerse" a user in the interface paradigm rather than just form an abstract connection?
In therapeutic haptic devices where a controller guides the repetitions of the patient, it can happen that the patient does not learn and rehabilitate properly because the task ends up being executed by the controller, which is known as the slacking problem. What would be the correct stability concept to consider in controllers such that the patient is still challenged?
Are there any special criteria to evaluate soft-tissue models implemented using a mass-spring system when the user's movements are given from the haptic device?
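Commonly used criteria include numerical stability at haptic update rates (~1 kHz), force continuity at contact, and deformation accuracy against an FEM reference. The stability criterion in particular falls directly out of the integration scheme: for an explicit or semi-implicit step the time step must be small relative to √(m/k). A minimal sketch of the per-node update such an evaluation would exercise (parameter values are illustrative assumptions):

```python
import numpy as np

def step(pos, vel, neighbors, k=200.0, c=2.0, m=0.01, dt=0.001):
    """Semi-implicit Euler update for one mass-spring node.
    neighbors: list of (neighbor_position, rest_length) pairs.
    Stability requires dt to be small relative to sqrt(m / k)."""
    force = -c * vel                              # viscous damping
    for npos, rest_len in neighbors:
        d = npos - pos
        dist = np.linalg.norm(d)
        force += k * (dist - rest_len) * d / dist  # Hooke spring toward rest length
    vel = vel + dt * force / m                     # update velocity first...
    return pos + dt * vel, vel                     # ...then position (semi-implicit)
```

Driving this model with recorded haptic-device trajectories and comparing the resulting displacement and force fields against a slower but more accurate FEM simulation of the same tissue is one standard way to score such models.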
We know that laparoscopic systems with haptic feedback have advantages over those without. So, are they in mass production? In which countries?
We are pleased to release the "haptic-based guidance database", which consists of data from 25 naive pairs of human participants in human-human guiding demonstrations using a hard rein, and 10 naive participants in a robotic guidance scenario to test the control policies identified from the human-human demonstrations.
I'm interested in the performance of psychometric procedures to assess perception properties (haptics in my case). I do this with Monte Carlo simulations based on psychometric functions.
Does anybody know of a reference on how the psychometric function of an experienced observer differs from that of naive, i.e. inexperienced, observers?
Hints are very welcome, best regards, ch
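For what it's worth, the psychometric-fitting literature (e.g. Wichmann & Hill's work on the psychometric function) typically models the experienced/naive difference through the lapse rate and the slope: naive observers tend to show more stimulus-independent errors (higher lapse) and often shallower slopes. A minimal Monte Carlo sketch along those lines, which you could drop into your simulations to compare the two observer types (the specific parameter values are assumptions for illustration):

```python
import math
import random

def psychometric(x, threshold, slope, lapse=0.0):
    """Logistic psychometric function with a lapse rate: the lapse term
    compresses the curve away from 0 and 1, as for an inattentive observer."""
    p = 1 / (1 + math.exp(-slope * (x - threshold)))
    return lapse / 2 + (1 - lapse) * p

def simulate_observer(levels, threshold, slope, lapse, n, seed=0):
    """Monte Carlo responses: proportion of 'yes' answers at each level."""
    rng = random.Random(seed)
    out = []
    for x in levels:
        p = psychometric(x, threshold, slope, lapse)
        out.append(sum(rng.random() < p for _ in range(n)) / n)
    return out
```

Simulating, say, an experienced observer (lapse 0.01, steep slope) against a naive one (lapse 0.05, shallower slope) and fitting both lets you quantify how much the procedure's threshold estimate is biased by observer experience.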
Has there been any work on haptic codes in architecture? Can there be a concept of haptic form? How do we haptically perceive the environment and architecture, and how do we code it into aesthetics?
I am trying to compare data about haptic perception with physical data about human tissues. I want to understand how variability in tissue thickness could decrease the reliability of osteopathic palpation. I found some values of muscle compliance and haptic perception in Howell's work on the "virtual haptic back".
I want to model the user's hand in a VR haptic system controller. To make it more realistic and convenient, I try to account for the different uncertainties introduced by different users. I am trying to find new models for this purpose, ones that go beyond classical models such as linear second-order models with uncertainty.
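For reference, even the classical baseline can capture user-to-user variation by sampling the parameters of a second-order hand model (mass-spring-damper) from assumed distributions and checking the spread of responses; any richer model would need to beat this baseline. The nominal values and ±50% uncertainty below are illustrative assumptions, not measured hand parameters.

```python
import numpy as np

def simulate_hand(m, b, k, force, dt=0.001, steps=2000):
    """Step response of m*x'' + b*x' + k*x = F via semi-implicit Euler."""
    x, v, xs = 0.0, 0.0, []
    for _ in range(steps):
        a = (force - b * v - k * x) / m
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

def monte_carlo_hands(n=100, seed=0):
    """Sample user-to-user variation as multiplicative uncertainty around
    nominal mass/damping/stiffness, and collect the step responses."""
    rng = np.random.default_rng(seed)
    responses = []
    for _ in range(n):
        m = 0.5 * rng.uniform(0.5, 1.5)     # kg   (nominal 0.5, +/-50%)
        b = 5.0 * rng.uniform(0.5, 1.5)     # N s/m
        k = 200.0 * rng.uniform(0.5, 1.5)   # N/m
        responses.append(simulate_hand(m, b, k, force=1.0))
    return responses
```

The envelope of these sampled responses gives the uncertainty band a robust controller must tolerate; alternatives such as LPV models or data-driven (e.g. Gaussian-process) hand models would be evaluated against the same band.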
Does anyone know of any papers studying the effects of visuo-haptic systems on presence? That is, the mixture of virtual graphics and haptic devices that provide feedback for the virtual objects being interacted with.
Most papers I have been able to find focus on desktop-based systems where the user is sitting and interacting with a haptic device. Is anyone aware of any similar study where tangible feedback or haptic devices are used in conjunction with "more traditional", fully immersive virtual reality environments?
Can others here share their knowledge of current research that pursues the use of illusion (MVF therapy as in Ramachandran, proprioceptive VR illusion as in Ehrsson/Blanke) to specifically target positive anticipation, and the growth of esteem and self-efficacy? This is the focus of my current research, to fashion anticipative relief between therapeutic (paraplegic) sessions. Any thoughts appreciated.
In visual and auditory (basic-perceptual) paradigms, degradation of stimuli is common. In vision there are blurring or contrast changes; in audition there are, e.g., high- and low-pass filters. Does anyone know if there are paradigms available that allow one to degrade haptic stimuli, in the sense that these degradations make the task harder or easier?
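For vibrotactile stimuli delivered through actuators, one candidate analogue of visual blurring is low-pass filtering the drive waveform: lowering the cutoff removes the high-frequency detail that carries texture-like information, parametrically grading task difficulty. The sketch below uses a simple single-pole filter purely as an illustration of the idea; the cutoff values are assumptions. (Other options in the same spirit: adding band-limited noise to the waveform, or spatially smoothing the activation pattern on a tactor array.)

```python
import numpy as np

def degrade_vibration(signal, fs, cutoff_hz):
    """Crude low-pass 'blur' for a vibrotactile waveform: single-pole IIR.
    Lowering cutoff_hz strips high-frequency detail, a rough haptic
    analogue of visual blurring. fs: sample rate in Hz."""
    dt = 1.0 / fs
    alpha = dt / (dt + 1.0 / (2 * np.pi * cutoff_hz))
    out = np.empty_like(signal, dtype=float)
    acc = 0.0
    for i, s in enumerate(signal):
        acc += alpha * (s - acc)   # exponential smoothing step
        out[i] = acc
    return out
```

Sweeping `cutoff_hz` from well above to well below the signal's dominant frequency would give a graded degradation continuum comparable to a blur series in vision.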
From thermoreceptors to thermoTRP channels, the literature typically specifies their response range/characteristics in terms of temperature. I guess this is because researchers have typically used "temperature stimuli", that is, thermal stimuli specified in terms of temperature, as the experimental stimuli. However, I wonder whether the thermoreceptors or the thermosensory neurons really respond to "temperature", or whether it is possible that they actually respond to the "heat" that flows through them.
Consider what happens physically when a temperature change is applied to the skin surface. The thermoreceptors are located below the skin surface. When a temperature change is applied, the temperature difference between the skin surface and the skin layer at which the thermoreceptors are located creates heat flow. Thus, the thermoreceptors encounter not only a temperature change but also a newly created heat flow.
If we look at the neural responses, a sudden change in skin temperature, e.g. a temperature step, typically causes a transient overshoot in discharge frequency followed by adaptation to a new static frequency. Considering heat transfer within the skin, the rate of temperature change, and thus the heat flow, is maximal at the moment of stimulus application and settles to a steady-state value afterwards. So to me, the neural responses seem to correspond well with the heat flow.
I wonder if researchers familiar with this topic can give their comments.
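The reasoning above can be illustrated numerically: a 1-D heat-diffusion model of the skin, given a surface temperature step, predicts that the temperature gradient (and hence the heat flux) at receptor depth rises to a transient peak and then decays, qualitatively matching the overshoot-then-adapt firing pattern. The sketch below uses an explicit finite-difference scheme; the diffusivity (~1e-7 m²/s, a typical order of magnitude for soft tissue), receptor depth (~1 mm), and step size are rough assumptions for illustration only.

```python
import numpy as np

def heat_flux_at_depth(n=100, alpha=1e-7, dx=1e-4, dt=0.01, steps=3000,
                       t_step=5.0, depth_idx=10):
    """Explicit 1-D heat diffusion into skin after a surface temperature
    step of t_step degrees. Returns the temperature-gradient history
    (proportional to heat flux) at a receptor depth of depth_idx * dx.
    alpha: thermal diffusivity (m^2/s); dx: grid spacing (m)."""
    T = np.zeros(n)
    T[0] = t_step                      # surface held at the stepped temperature
    r = alpha * dt / dx ** 2           # must stay < 0.5 for stability
    flux = []
    for _ in range(steps):
        T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[-1] = T[-2]                  # insulated deep boundary
        flux.append((T[depth_idx - 1] - T[depth_idx + 1]) / (2 * dx))
    return flux
```

Plotting the returned flux history shows the transient peak followed by a slow decline, i.e. the same overshoot-plus-adaptation shape as the discharge-frequency curves, which is consistent with the heat-flow interpretation.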
I would like to perform haptic tasks with blind individuals in a fMRI scanner. Are there any good tasks that are easily implemented in this setting?
Mounia, thank you for sharing the InGrid concept and references. What I found most meaningful was the wording here: “We believe that the embodied experience depends on the permanence (temporary or not) of the object in the embodied space and the changes that can bring within sensorimotor contingencies. This can be obtained by having a completely immerged user in the space of interaction. By analogy, interactive tabletops can be experienced as an extension of the body because not only the users are immerged in the sensorimotor space but also through the space of shared and private knowledge. The sensorimotor contingencies of interactive tabletops represent the space of actions and sensations that can be defined by extracting the sensorimotor invariants in both peripersonal and extrapersonal spaces” (p. 4).
Are you envisioning this technology for the early classroom? It seems it would be such a potential way to break through very young conceptual boundaries between where the functional self ends and the group-enabled self begins (in the Piagetian sense). Two things very much stand out: one is your mention of pericutaneous space (how the sense of self extends through the tools and interfaces we use to their boundaries and a bit beyond), and the other was body ownership (functional permanence and what one might call identity-separability). Can you affirm my guess that you see these facilitating mechanisms as a means to extend proprioception and subsequent efficacy, and that any break in reinforcing modalities (space, vision, tactile feedback, control locus) severs this illusion? If so I am much in agreement, and this was brought home to me during my research on illusion therapy (please see Henrik Ehrsson's research and Ramachandran's synaesthesia connection as well).
As individuals, we urgently need meaningful confirmations that everything is okay, that we can effect positive changes for ourselves and our surroundings, like anticipation of personal growth and retained control. The self does not end at liminal sensation, but at estimates of personal reach. It seems to me that belief for us is not just "passive change-response" (one for one) but "hopeful step-responses", similar to the metaphor of keeping our belief in the air like a balloon. It is not easy to sustain our belief if we lose attention on our goal, but it gets easier as we add "more hands in the air" in this attempt (additional modalities, as you indicate: touch, sound, what you so aptly call immergence). For those who remember the movie Somewhere in Time, like Christopher Reeve seeing the penny from the future, it takes only one dissonant proof to undo a stream of very hopeful, consonant affirmations, such that confirmation frequency becomes confirmation urgency instead: anxiously regaining lost belief. For children and therapy, it is not difficult to see how helpful the InGrid and similar designs might be in bridging the peripersonal space of childhood to the socially-buttressed extended space needed to succeed in life (to contribute individually and see the collectively beneficial goal).
Ziat, M., Fancher, J., Kilpela, K., Fridstrom, J., & Clark, J. J. (2013, April-May). InGrid: Rethinking the embodied space. Paper presented at the ACM SIGCHI Conference on Human Factors in Computing Systems, Paris, France.
Conference Paper InGrid: Rethinking the Embodied Space
Could you share some specifics on how this device will function? I am familiar with micromotor stimulation (there is a vest, I believe, with these mounted on the back for blind navigation), but I am curious about the subcutaneous stimulation and how it is achieved.
Presentation Haptic Hallucination Sleeve
Your article references the subjective experience of the passage of time, the kappa temporal/spatial dilation effect, and the user interface wait interval model (progress bar). Auditory models of this also seem to be extant and much of time perception study seems even to have originated with other modalities such as sound continuity/discontinuity sense (from Ch. XV, “Perception of Time” in James’ Principles). May I ask, are there haptic equivalents for either the kappa effect or the user progress bar visual? Do we see this subjective timescale appreciation in human touch as well?
Your article also states, “We believe that the perception of time depends on the nature of the stimulus (filled or empty) rather than on the speed of motion or on the distance covered by the stimulus” (Ziat & Saoud, 2013, p. 3-4). I too believe this to be very close to the perception of time - the concept of a timescale seems to originate from the human need to conceive semantic intervals within which we might discover meaning (something Chomsky termed elegantly as “discrete infinities” in Rieber, 2010). But do you think perhaps this need and this sense is not visuospatial or auditory since it can take on any modality? We sort of superpose our attentional frequency atop the presentational frequency of events in time – the latter does not change, but our perception of it feels dilated or constricted relative to our affective estimate of the interval’s relevancy and meaningfulness to us. Whether the gaps are filled or empty (continuous/discontinuous) seems less critical than how we perceive the interval (is it distracting from or deferring fulfillment, or is it aiding determination of our arrival at fulfillment). I have come to believe that our anticipation of undiscovered self-relevancy (which affects the rate of our focal frequency) within the temporal interval governs the apparent velocity of any transit of that interval – what do you think?
Conference Paper The Progress Bar as a Metaphor for Time Continuity and Discontinuity
“Ce qui manquerait le plus à cette nouvelle modalité perceptive est ce que les philosophes nomment qualia (Bach-y-Rita, 1996), c’est-à-dire les qualités des choses perçues. Malgré l’ensemble des possibilités permises par les dispositifs de substitution sensorielle, il leur est souvent reproché de ne procurer aucune émotion. Un aveugle, « regardant » sa femme grâce au TVSS, resta désappointée devant l’absence d’émotion ressentie” (Ziat, 2006, p. 64).
(“What this new perceptual modality would lack most is what philosophers call qualia (Bach-y-Rita, 1996), that is, the qualities of the things perceived. Despite all the possibilities afforded by sensory substitution devices, they are often criticized for providing no emotion. A blind man ‘looking’ at his wife thanks to the TVSS remained disappointed at the absence of emotion he felt.”)
I am so thankful you shared this in your dissertation, Mounia. It has been a tenet and wish for me for several years now, to pursue cognitive psychology partially for the purpose of researching haptic transmission as a means to connect humans via touch. Your statement says so much, Mounia: sight without touch is missing most of the meaning. We do not perceive simply to detect visual information about things; we perceive to confirm our hopes and feelings about what things might possess, relative to who and what we are to one another. Elaborating visual potential is often an attempt to sublimate our lost sense of touch.
The main tenet of haptic transmission is that current video/audio transmission doesn’t intrinsically contain or convey any emotion. Unlike touch, vision and sound are not exchanged - they are transduced and subsequently inferred (eyes don’t emit light and ears don’t talk). But only touch can mutually transmit feelings of warmth, adoration, fear, urgency (deep pressure) and affective identity. In the absence of the transmission of touch, we are transmitting only “facts” about people, not their feelings. The rest must be deduced from a mutual familiarity with interpreting visual expressions like gestures, tone of voice, eye aversion, latencies, etc.
Touch is so much closer to the exchange of information within thought than any other sense, because it is the only sense that is bidirectional without alteration or transduction. Whatever we transmit is what we also feel or would receive. The other senses are really for confirmation not transmission – but we have come to rely on the supporting modalities, as the concept of becoming civilized has distanced us from being human – vulnerable - to one another. In some ways, we are isolating ourselves from one another via technology and the need to do things “faster,” more efficiently. Affection takes time to express. We do not seem to have time for it anymore.
But if we take the next natural step and connect primarily via touch (reinforced by vision and sound), then parents overseas can reach children at home, and those who cannot receive or express by any other means can find hope again within the meaning expressed by comforting haptic connection. Maybe we can get back this missing dimension of distance communication – the experience of emotional attachment/reassurance itself. May I ask, Mounia – is this something of interest to you as well?
We have developed libraries that allow the user to manipulate the viewport (virtual camera) and/or 3D objects with use of haptic devices (one or two Phantom Omni devices, depending on configuration). Now, we would like to test the set-up on some "haptic beginners" to assess the efficiency, ergonomics and the learning curve.
We have prepared a "put a peg in the hole" exercise and are going to test the accuracy and trial execution time.
What other exercises would you suggest? Do you know any standard procedures of such assessments?