Science topic
Eye Tracking - Science topic
Explore the latest questions and answers in Eye Tracking, and find Eye Tracking experts.
Questions related to Eye Tracking
Hello,
I know that the Eye Tribe was acquired by Oculus but is it still somehow possible to purchase it somewhere? It can be second-hand too if there is someone willing to sell theirs. If not, does anyone know any other affordable eye tracker brand? I would only like to use it for demonstration purposes, not for scientific research. It would be nice if I could use it with Ogama software.
Best,
Merve
I am working on a study that involves collecting synchronized EEG and eye-tracking data integrated within the iMotions software to examine cognitive workload. I have set event markers to ensure precise synchronization between the data streams. However, I’ve encountered an issue where one data stream (e.g., eye tracking) contains missing values while the other (e.g., EEG) is complete, leading to partially incomplete rows in my dataset.
I would appreciate advice on:
- Best practices for handling missing data in synchronized multimodal datasets with event markers
- Any workflows or tools you’d recommend for preprocessing and aligning multimodal data in this context.
Any insights from those experienced with multimodal data analysis would be extremely helpful. Thank you!
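For the alignment step, a common pattern outside iMotions is to export both streams and merge them on their timestamps, keeping an explicit validity flag for the gappier stream. A minimal pandas sketch (the column names and tolerance value are illustrative assumptions, not iMotions' actual export format):

```python
import pandas as pd

# Hypothetical exports: one row per sample, shared millisecond timestamps.
eeg = pd.DataFrame({"timestamp_ms": [0, 4, 8, 12, 16],
                    "eeg_uv": [1.2, 1.3, 1.1, 1.4, 1.2]})
eye = pd.DataFrame({"timestamp_ms": [0, 8, 16],   # eye samples with gaps
                    "pupil_mm": [3.1, 3.3, 3.2]})

# Align the sparser stream to the denser one, tolerating small clock offsets.
merged = pd.merge_asof(eeg, eye, on="timestamp_ms",
                       direction="nearest", tolerance=2)

# Keep a validity flag so analyses can exclude reconstructed samples later.
merged["pupil_valid"] = merged["pupil_mm"].notna()

# Interpolate short eye-tracking gaps rather than dropping complete EEG rows.
merged["pupil_mm"] = merged["pupil_mm"].interpolate(limit=3)
print(merged)
```

With the flag kept separate, you can either analyze complete rows only or interpolate short gaps while still reporting how much data was reconstructed.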
Hi everyone,
I am exploring eye trackers and I am specifically looking for current models that could be compatible with the OGAMA software, such as Tobii Eye Tracker 5 or GP3 HD 150Hz. Has anyone recently worked with any eye trackers and confirmed their compatibility with OGAMA? I'd appreciate any recommendations or insights, especially with newer models.
Thanks in advance for your help!
#EyeTracking #Ogama #Tobii #Gazepoint #iMotions
Hello all,
Currently I am working with eye-movement data collected using the Tobii Pro Glasses 3. We have a clinical group and a healthy control group who performed a series of tasks while wearing these eye-tracking glasses. Task durations were not controlled, as duration was also a measure of interest, so participants took different amounts of time (seconds) to complete the tasks (in general, the clinical group took longer than the healthy control group). We want to compare the number of saccades and fixations performed during each task between the two groups. Since the time taken to complete the task would naturally contribute to the number of eye movements performed, is there a way to perform this between-group comparison while also taking into account this variation in time? Any suggestions would be very much appreciated!
Thanks,
Saee.
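One standard way to handle unequal task durations is to convert raw counts to rates before the group comparison; a more principled alternative is a Poisson (or negative binomial) regression with log(duration) as an offset. A minimal sketch of the rate approach, with made-up numbers:

```python
import numpy as np

# Hypothetical per-participant data: fixation counts and task durations (s).
clinical_counts = np.array([210, 340, 290])
clinical_secs   = np.array([70.0, 120.0, 95.0])
control_counts  = np.array([150, 180, 160])
control_secs    = np.array([45.0, 60.0, 50.0])

# Convert raw counts to rates (events per second) before comparing groups,
# so that longer task durations do not inflate the clinical group's counts.
clinical_rate = clinical_counts / clinical_secs
control_rate = control_counts / control_secs

print(clinical_rate.mean(), control_rate.mean())
```

The rates can then go into a standard t-test or ANOVA; the Poisson-offset model additionally respects the count nature of the data and is available in most statistics packages.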
Has anyone used (or attempted to use) a touch-monitor or a touch-screen-laptop for the recording of behavioral responses in an eye-tracking experiment with young children?
This would be ideal since young children cannot use a mouse or a keyboard. However, I am worried about the technical setup, e.g., the distance of the eye-tracker (50+ cm) vs the length of a child's arm (21-22 cm).
Any advice or experience is welcome! Thank you
Hi, I am looking for a place to buy used Pupil Core eye-tracker glasses or a similar system for outdoor wear. Does anyone have any ideas, or a pair to sell?
I tried Labx.com; are there other similar websites?
Thanks!
How are researchers using experimental methods, such as eye-tracking or neuroimaging, to investigate phonological processing and perception?
Hi,
I have pupil size data grouped by trial. In the trial validity estimation process, I currently focus on the recorded/missed data ratio criterion. Is there a recommended threshold, or any general agreement about one, for this ratio?
Thanks
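There is no universally agreed threshold; values around 50-75% valid samples per trial are commonly reported, but the choice should be justified against your paradigm. A minimal sketch of the computation (the threshold value is an assumption):

```python
import numpy as np

# Hypothetical pupil trace for one trial; NaN marks missed samples
# (blinks, track loss).
pupil = np.array([3.1, 3.2, np.nan, np.nan, 3.3, 3.2, np.nan, 3.1])

valid_ratio = np.mean(~np.isnan(pupil))  # recorded / total samples
THRESHOLD = 0.5                          # assumption: adjust per study
keep_trial = valid_ratio >= THRESHOLD
print(valid_ratio, keep_trial)
```

Whatever cutoff you pick, reporting it explicitly (and perhaps a sensitivity check at a second cutoff) is usually more important than the exact value.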
I am doing a study on the different facets of metacognition and plan to measure metacognitive skills through eye tracking. The reference task is a problem verification task (subjects view math equations and answer whether the result is correct or not). Given that eye tracking has not been studied extensively on this specific topic, are there any recommendations on which indices to use for each metacognitive skill? I was thinking the following:
Prediction: Number and duration of fixations (more = harder) + prediction of solvability / confidence judgements?
Planning: Number of fixations in relevant AOI's (e.g. the operator)
Evaluation: Number of regressions to and from solution
Monitoring: prolonged fixations or fixation clusters
Any input would be greatly appreciated, thank you for your interest and time!
Hello everyone, we are analyzing data exported by an SMI RED-n desktop eye tracker and hoping to find a suitable eye-movement data analysis toolkit.
The exported data is currently in txt format (including raw data, Event Statistics - Trial Summary, and Event Statistics - Single) and IDF format, and we hope the toolkit can be compatible with these. It should also support AOI analysis, fixation detection, and other eye-movement metrics.
Any help will be appreciated.
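If a dedicated toolkit proves hard to find, the text exports can usually be read directly: SMI text exports typically begin with '##' comment lines followed by a tab-separated table. A hedged pandas sketch (the column names below are illustrative, not guaranteed to match your export):

```python
import io
import pandas as pd

# A tiny stand-in for an SMI text export: '##' header lines followed by a
# tab-separated sample table (column names are illustrative).
raw = """## [Run] SMI export
## Sample Rate: 250
Time\tType\tL POR X [px]\tL POR Y [px]
1000\tSMP\t512.3\t384.1
1004\tSMP\t513.0\t383.8
"""

# comment="#" drops the metadata lines; the first remaining row is the header.
df = pd.read_csv(io.StringIO(raw), sep="\t", comment="#")
print(df.shape)
```

Once the samples are in a DataFrame, fixation detection and AOI assignment can be layered on top with standard numeric tools.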
Hello, I've acquired a handful of Mirametrix S2 eye-trackers and would like to use them in lectures/workshops to demonstrate the principles of eye tracking. I realise they are out of date, but was rather hoping someone has a copy of their driver install files and the original eye-tracking software. I have contacted the manufacturer but I do not hold out too much hope.
Is there a way to record time-stamped video from the webcam (accounting for dropped frames)?
Hi everybody, I need some help with an analysis of pupillometric data; it’s the first time that I use pupillometry, so I hope I didn’t make too many mistakes or at least that they won’t jeopardize the whole analysis.
I ran a between-subjects experiment in which the participants watched the same visual stimulus in three different conditions; during the stimulus presentation I recorded their eye-tracking data. I'm very interested in pupillometry but here's my problem:
- the software I use (iMotions) provides me with the aggregated and auto-scaled data for each of the three conditions: these data are apparently very clean and consistent (there has to be some kind of automated correction of blinks and artifacts).
- The software output basically has two columns: timestamp (in milliseconds, identical in the three conditions) and pupil diameter (in cm, strangely enough, but never mind…)
- I ran an ANOVA with condition as the factor and pupil diameter as the dependent variable, F(2, 5141) = 119.38, p < .001, ηp2 = .044, (1-β) > .99. Bonferroni-corrected post-hocs were all significant, p < .001 (see graph 1 in the attachment).
- I got suspicious: the significance was too high and, above all, the three conditions do not start at the same point on the y axis (y(cond1) = 0.47; y(cond2) = 0.47; y(cond3) = 0.50). I thought the significant difference might be due to this (say the participants in Condition 3 had larger pupils for some reason); so, to baseline the data, I tried to make the conditions start at the same point.
- To do this, I rearranged the columns so that they show not the pupil diameter but the pupil dilation: the new y value at, say, x = 1 (time frame = 132) is the old y value (pupil diameter) minus the y value at x = 0 (see the attached screenshot). In the example, for condition 1, the new value for time frame 132 is 0.48 - 0.47 = 0.01.
- I ran the same ANOVA and now the results appear more reliable, F(2, 5141) = 42.15, p < .001, ηp2 = .016, (1-β) > .99. There has been a drastic decrease in the F value and partial eta squared. Bonferroni-corrected post-hoc analyses revealed that only condition 2 was significantly different from the other ones (p < .001) (see graph 2).
…and now…question time!
- Would you say this procedure is right? I guess there could be many errors in it, but I'm not an expert, and reading many papers on this matter didn't help me as much as I'd hoped.
- Would you have trusted the ANOVA results at point 3? Or was I right to baseline those data?
- To baseline the data, I acted according to nothing but a rule of thumb. Would you suggest other procedures?
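The subtractive baseline described in the question is indeed the standard first step in pupillometry; a common refinement is to subtract the mean of a short pre-stimulus window rather than a single sample. A minimal sketch (values and window length are illustrative):

```python
import numpy as np

# Hypothetical trial-level traces: rows = trials, cols = time samples.
trials = np.array([[0.47, 0.48, 0.50],
                   [0.50, 0.52, 0.55],
                   [0.47, 0.46, 0.48]])

# Subtractive baseline: subtract a pre-stimulus reference (here just the
# first sample, as in the question; in practice average ~200-500 ms) so
# every trial starts at 0 and between-subject differences in absolute
# pupil size no longer drive the condition effect.
baseline = trials[:, :1]
dilation = trials - baseline
print(dilation)
```

Note also that an F-test with thousands of denominator degrees of freedom suggests every time sample was treated as an independent observation; averaging within participants before the ANOVA, or using growth-curve models, avoids that inflation.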

Does anyone have an idea on whether an eye-tracker would burden the energy consumption of an electric vehicle, also considering that the eye-tracker would have a very simple algorithm versus a more complex one?
For instance, compare an eye tracker that only measures how much time the driver looks at the centre of the road against one that tracks how much time the driver looks at the centre of the road, the mirrors, the instrument panel, the centre stack…
I'm looking for a database with faces in different angles, different emotional expressions and adult male/female, white, all with a neutral background. This is an eye-tracking study and we are looking specifically for the different angles, at least 3. Thank you for any leads!
Hi,
I am trying to analyze the duration of the gaze toward a specific object. Our lab has the Smart Eye Pro systems (mounted on a driving simulator), and I would like to find a way to annotate the locations of road users/objects by time so that we can track their locations video-frame by frame and determine whether the gaze landed on the road users/objects.
With that said, is there any video annotation software that can save the coordinate data of the road users frame by frame?
Thank you in advance!
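Whatever annotation tool is used (CVAT and labelme are common free options, mentioned here only as examples), the downstream check reduces to a point-in-box test per frame. A minimal sketch, assuming one bounding box per object per frame can be exported:

```python
# Given per-frame bounding boxes for annotated road users and per-frame
# gaze coordinates, flag which objects the gaze lands on.

def gaze_on_object(gaze, boxes):
    """gaze: (x, y) in pixels; boxes: dict name -> (x1, y1, x2, y2)."""
    gx, gy = gaze
    return [name for name, (x1, y1, x2, y2) in boxes.items()
            if x1 <= gx <= x2 and y1 <= gy <= y2]

# One frame's annotations (pixel coordinates are made up).
frame_boxes = {"pedestrian": (100, 200, 150, 320),
               "car": (400, 250, 560, 360)}
print(gaze_on_object((120, 260), frame_boxes))  # -> ['pedestrian']
```

Running this per video frame and accumulating the hits gives the gaze duration per road user directly.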
I will conduct stimulated recall interviews with graduate students after eye-tracking. The interviews will be prompted by each participant's eye-gaze replay in Tobii Studio. Can I analyze the data I collect from these interviews using thematic analysis?
I am working on eye-tracking data from a Tobii eye tracker. I need to generate heatmaps and scanpaths for visualization. I have fixation coordinates for the left and right eye, and a timestamp for each fixation, and I need to calculate saccades to generate the scanpath. Does anyone know how to create heatmaps and scanpaths when we have only fixations and time?
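Given only fixation coordinates and timestamps, saccades can be derived as the vectors between consecutive fixations, and a basic heatmap is a duration-weighted 2D histogram. A minimal numpy sketch (screen size, bin counts, and values are illustrative):

```python
import numpy as np

# Hypothetical fixation list: x, y in pixels, duration in ms
# (per eye, or averaged across eyes beforehand).
fx = np.array([100.0, 300.0, 320.0, 500.0])
fy = np.array([100.0, 150.0, 400.0, 420.0])
dur = np.array([200.0, 150.0, 300.0, 250.0])

# Saccades are simply the vectors between consecutive fixations.
sacc_amplitude = np.hypot(np.diff(fx), np.diff(fy))

# A duration-weighted 2D histogram is a basic heatmap; smooth it
# (e.g. scipy.ndimage.gaussian_filter) for display.
heatmap, _, _ = np.histogram2d(fx, fy, bins=(8, 6),
                               range=[[0, 800], [0, 600]], weights=dur)

print(sacc_amplitude, heatmap.sum())
```

For the scanpath itself, plot circles at (fx, fy) sized by duration and draw lines between consecutive fixations; matplotlib's scatter and plot calls suffice.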
Hello guys,
I am really new to MNE and EEG. I am doing my master's thesis on EEG and eye-tracking, but I am finding it difficult to import and evaluate my collected data using MNE. My data includes Excel, text, and psydat files, plus one .ipynb file. I have watched many tutorials and read the documentation, but I am still confused about what to import first. Any help would be much appreciated.

Hi everyone, I'm looking for recommendations about Eye Tracking Devices, specifically for use in behavioral and physiological research. My plan is to register eye movement, pupil size and EEG recordings while participants complete computerized behavioral tasks.
Really appreciate in advance any info about brands, companies, prices and prior experiences using the product.
Hi all,
I am looking for a toolbox or any tool that would allow me to extract pupil size and changes in pupil size of a subject depicted in a video. I have a set of videos, each one depicting a single subject and their eyes are visible throughout the video. I would like to be able to extract the information of the pupil size and change during the video and have that in a set of numbers that I can potentially plot, analyze and correlate with other variables.
Please note that I am NOT aiming to track pupil size of an observer (watching the video), therefore standard eye-tracking devices won't work for my goal. I am instead interested in the pupil size of the subjects depicted in the videos per se.
Thank you,
Best wishes,
Helio
Hello community
Has anyone experienced tracking a moving object with the new Pupil Labs Invisible eye tracker?
I defined my computer screen as a surface with QR codes, but now, within that surface, I would like to have a dynamic AOI (a moving car on the screen).
Do you know if the Pupil ecosystem has something built-in for this?
Thanks
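As far as I know, the classic Pupil surface tracker has no built-in dynamic-AOI feature, but if the car's on-screen position is logged per frame by the stimulus script, the check can be done offline in surface-normalized coordinates. A minimal sketch (AOI size and coordinates are assumptions):

```python
def in_dynamic_aoi(gaze_norm, car_center, half_w=0.05, half_h=0.04):
    """gaze_norm, car_center: (x, y) in surface-normalized [0, 1] coords."""
    gx, gy = gaze_norm
    cx, cy = car_center
    return abs(gx - cx) <= half_w and abs(gy - cy) <= half_h

# Per-frame car positions (e.g. exported from your stimulus script)
# and surface-mapped gaze samples for the same frames.
car_track = {0: (0.10, 0.50), 1: (0.12, 0.50), 2: (0.14, 0.50)}
gaze_track = {0: (0.11, 0.51), 1: (0.30, 0.50), 2: (0.14, 0.53)}

hits = [f for f in car_track if in_dynamic_aoi(gaze_track[f], car_track[f])]
print(hits)  # frames where gaze was on the moving car
```

The same idea scales to any rectangle or circle that moves along a known trajectory; only the per-frame position log is required.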
"Behavioral 'science'" offers close to nothing for Artificial General Intelligence (& I believe eventually any good influences might well be FROM AGI to Psychology). One quite possible example:
My guidance for behavior science, even if not verified OR falsified by Cognitive Psychology "folks" (because they are stuck in non-rationally-justified RUTS), could just be "aped" (that is, guessed at) and improve AGI (and progressively more and more, even by trial-and-error). THEN, instead of AGI looking to Psychology, rather, as in the past with ACT* (information processing science), Psychology could learn a LOT from AGI .
My way for better Psychology is self-guiding emergent ways (self generative processes -- which are some quite possibly clear things (with KEY overt manifestations, that unfold with ontogeny -- initially perceptual/attentional phenomenon). I would look for such for Psychology as a Cognitive Developmental Psychology person, but I am old and retired.
It seems obvious to me that this is exactly what Artificial General Intelligence NEEDS -- one clear thing: self generative processes with AGI ontogeny (emergent, unfolding processes at proper points). Intelligent things show creative self-guidance ...
Dear fellow researchers,
I am looking for some advice on eye-tracking enabled VR headsets. I am currently contemplating the HTC Vive Pro Eye and the Pico Neo 3 Pro Eye... Both have built-in eye tracking by Tobii. Does anyone have any experience with either of them? Or can you recommend any other brands?
We are planning to use it for research in combination with EEG and EDA sensors to assess human response to built environment. Any advice is much appreciated.
When you set up an experiment, with "defined" "stimuli", these are the stimuli in YOUR imagination and/or YOUR model.
BUT: very often it is a matter of representation (from long-term memory) of the circumstance(s)/setting(s), AND the stimuli can only be understood in THAT context -- the context of the content of developed representation of such circumstances/settings (think, for example, of problem-solving). The Subject, in most significant settings, has her/his representation of such circumstances/situations/settings. THAT actually more than helps to properly define the stimuli , for such is often the MAIN THING for defining (recall that it is the Subject (surrounding behavior patterns) very often _THAT_ MUST, in science, be what allows any empirical or true definition of stimuli).
All this is outlined by, and fully consistent with, Ethogram Theory (see my Profile and, from there, read A LOT-- I do provide guidance on readings order). The Theory itself is internally , and likely externally, consistent and it is strictly empirical (in the grounding/foundation of ALL concepts -- i.e. ALL clearly linked to directly observable overt behavior PATTERNS); and thus, given all those characteristics, there are hypotheses that are clearly verifiable/falsifiable .
Is there reason to believe that data, available or possible, from eye tracking is far greater than what is utilized? YES ! :
Computer scientists tell us that ANY similar or exact patterning of visual perception or attention, with _ANY_ overt manifestations, can be captured. Unquestionably much develops from input through the eyes (the MAJOR example: ontogeny); plus, behavior IS PATTERNED (as would be true for any significant biologically-based functioning (and ALL behavior is)). AND, ALL such could/can be found/identified using eye tracking and computer assisted analysis. ANY/ALL. Thus, it would be useful for psychology to capture any/all such. (It would be more constructive to start with analysis including most-all subtle behavior patterns; that avoids at least most unfounded a priori assumptions (actually: presumptions).)
Unlike modern assumptions, little is likely just random; and YET ALSO, for-sure, little is just statistical. (Nature doesn't play dice.)
True, this is self-serving (for me, for my definitely empirical theory) BUT IT IS ALSO TRUE.
Dear all,
We are preparing an exploratory pupillometry analysis of previously collected eye-tracking data (Tobii T1750). Since we have no experience with pupillometry, we would be very grateful for handy tips or, if possible, some R scripts for Tobii data.
Thank you!
Best,
Friederike
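While waiting for ready-made R scripts, the usual preprocessing pipeline is small enough to sketch. The steps below are written in Python but translate line-for-line to R; the track-loss code 0.0 is an assumption about the Tobii T1750 export, not a documented value:

```python
import numpy as np

# Typical pupillometry preprocessing (sketch): 1) mark blinks/track loss
# as NaN, 2) linearly interpolate across the gaps, then baseline-correct
# per trial. Padding samples around each blink before interpolating is a
# common extra step, omitted here for brevity.
pupil = np.array([3.0, 3.1, 0.0, 0.0, 3.2, 3.1])  # 0.0 = track loss

pupil = pupil.astype(float)
pupil[pupil == 0.0] = np.nan                # step 1: flag lost samples

idx = np.arange(len(pupil))
good = ~np.isnan(pupil)
pupil_interp = np.interp(idx, idx[good], pupil[good])  # step 2: interpolate

print(pupil_interp)
```

Exact parameters (padding width, maximum interpolatable gap, baseline window) depend on the 50 Hz sampling rate of the T1750 and should be reported with the analysis.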
Hello,
I'm carrying out research on visual perception in children with autism, and I need to acquire an EEG signal during an oddball paradigm in combination with an eye tracker. I'm using a BioSemi 64-electrode cap and the Eye Tribe eye tracker. At the moment I don't know how to synchronise the data with reliable accuracy, which matters because I work with ERPs and precise timing is fundamental. Thanks!
Hello everyone!
I am interested in using eye-tracking in young infants (a few months old). Does anyone have any experience or concrete advice concerning the calibration of the eye-tracker?
I have read descriptions about using real toys (e.g. moving behind a glass on which 5 dots mark the looking positions, or through the holes of a cardboard panel) but I would love to learn more details from people who have worked with these ages (distance from baby, distance between the dots/holes, any other method, etc.).
Thank you for any advice!
Alexandra
Hi, I was reading some papers that use machine-learning approaches for automated emotion classification, but they don't specify which eye-tracking variables are the most informative for inferring a person's affective state.
Could anyone recommend papers (articles, books, or chapters) that report associations between eye-tracking measures other than pupil size (e.g. fixation duration, saccades, blinks) and affective variables (valence, arousal, or specific emotions)?
I'm designing a study that assesses L2 learners' ability to pay attention to contextual cues when performing a communicative function (e.g., apologizing, thanking). I'd like to use a short video clip to see where they fixate while they produce a brief speech. Do eye trackers work on a video input? Also, is there any rule regarding the length of the input? I'm thinking of a brief background scene (5-10 sec.) to contextualize the situation, leading to the critical scene where the picture is paused and participants must produce the communicative function directed at the person in the video. The focus of interest is the critical scene, but I'm wondering how long a background is sufficient before the critical scene.
Can you realize "top-down" and "bottom-up" ARE [ or certainly can, if not MUST, be ] THE SAME THINGS at important junctures IN ONTOGENY (child development)?
This Question is NOT addressing YOU (the "self"), your social relations and activities, NOR your language. This question is about the biological processes SHOWN IN BEHAVIOR PATTERNS _PER_ _SE_ of the organism (aka "just 'behavior' "), DURING ONTOGENY, and beginning in overt and observable ways. As words are tools, to express certain things, sometimes (and even and especially at some critical times) the words used will seem contradictory or an oxymoron ,(e.g. it is hard to truly well-imagine a case of perception beginning thought). This cannot be viewed as a real problem. SO: at important key 'shift' points in development, what we CONCEPTUALIZE as "top-down", may have their actual key inception in what, in the highly [overt] behavior-related processes, may fundamentally have to be seen as "BOTTOM-UP". Major (if not THE major) shifts in behavior PATTERNS during cognitive development (of emerging seemingly qualitatively different stages/levels) may certainly have their inceptions in BASIC perceptual shifts (actually seeing new things or some things in a significantly new framing perspective AS new (or, in other words, the latter: "as seen anew")). [(THIS is seen as possible, if not necessary, if only by the reasoning processes of EXCLUSION -- if you are an empiricist/scientist.)]
With this perspective: the UN-defined bases of cognitive stages (equilibrium type 2, the balance between the stages and the point allowing for the stage shifts) is both more simple AND more researchable (with eye tracking) than anything conceived in academia heretofore. In short, this perspective is much more strictly empirical AND TESTABLE. [ Piaget clearly, yet ultimately, ONLY ever said one thing about such stage shifts: that they were "due to maturation" -- Piaget realized this was the most serious deficiency in his theory to the end of his days (explaining why his LAST BOOK was on Equilibration). Piaget was big on "formal logic", which inherently, as applied, results in embracing limited content -- for THAT (as applied) is OF our normative conceptual system, not of independent, actual real biological systems).]
To get more perspective of my view and approach, _start_ at: https://www.researchgate.net/post/Why_an_ethological-developmental_theory_of_cognitive_processes_and_of_cognition and READ all the Answers (follow-ups) and "go from there".
The same-different task requires subjects to indicate whether a pair of stimuli seen or heard are the same (say AA or BB) or different (say AB or BA). Researchers often collect offline measures (e.g. response accuracy and latency) in the task.
Is there a way that I can collect online measures using eye-tracking, ERP or some other experimental techniques in psychology? In other words, instead of people reporting whether the pair of stimuli are different, I hope to infer their knowledge based on their fixations and brain potentials. Please recommend papers that I can read (if any). Thank you!
Given the nature of emotionality, an analogous application (of reactions, then abilities), across differing domains, may occur by "seeing" an analogy (or analogue) with some of the representations the organism ALREADY, itself, has achieved with ontogeny (NOTE: new representation levels/stages, at inception, are VIA perceptual shifts)
It is hard to see "domain generalization" of skills occurring across greatly diverse spaces AND times just based ONLY on objective, key similarities in the environment (<- though THAT may always, or in-effect, be true) . It seems we may need processing that provides analogies to what one already knows, AND * THAT * "seen" in the environment, to "generalize" applications of certain cognitive abilities ** -- although I am more-than-reluctant to posit this in-advance. Still, an idea for how such analogy-with-the-already-represented can be seen as clearly possible because of the varying situations/circumstances that can trigger a seemingly same particular emotion (and seen as particular by Psychologists). This may be harder to see with those 3 "limited" emotions always present with key learning (and, the ONLY ones I admit may OR that must be involved, i.e. interest-excitement-anticipation and surprise and joy ). BUT my point seems clearer when one reflects on what causes anger (possibly a somewhat "more advanced" emotion, though many/most still don't see that as a secondary emotion, but, rather, primary ***). Still, one may well see this point (my point here) in the case of "guilt" (definitely a secondary emotion) (and the analogies essentially applied between understood circumstances and new circumstances, that may change over ontogeny).
** FOOTNOTE : This may be the very reason I eventually admitted the set of 3 very basic and likely always-present [with key learnings]: emotions (noted above).
*** FOOTNOTE: I, myself, see distress and frustration (NOT anger) as primary (but NOTE: These, just mentioned, are NOT among the THREE (see above) that proactively impel the organism to discover, thus not directly involved in "seeing" things (or combinations of things, etc.)(but these surely may be associated with needed inhibition, so indirectly involved and also key, that way))
Hello researchers,
I want to design an eye-tracking study to collect and analyze gaze data at the word level. I am aware of the effect of pixel density on the font size shown to the participant, and of the need to consider eye-tracker accuracy when separating the data for each word. The papers I have read report the font size used to create easily readable text, or compare reading times across font sizes; in other words, they do not focus on gaze data at the word level but on gaze while reading a text in general.
MY QUESTION: what would be an appropriate font size for a word-level eye-tracking study? Or how do I find that size?
Thank you.
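Reading research usually specifies text size in degrees of visual angle rather than points, with roughly 0.2-0.4 deg per character as a common ballpark (treat these numbers as assumptions to check against your own literature). Converting a visual angle to pixels for a given setup:

```python
import math

def deg_to_pixels(deg, viewing_distance_cm, screen_width_cm, screen_width_px):
    """Size on screen (px) subtending `deg` degrees at the given distance."""
    size_cm = 2 * viewing_distance_cm * math.tan(math.radians(deg) / 2)
    return size_cm * screen_width_px / screen_width_cm

# Example: 0.3 deg per character at 60 cm on a 53 cm wide, 1920 px display.
px_per_char = deg_to_pixels(0.3, 60, 53, 1920)
print(round(px_per_char, 1))
```

Working backwards from your tracker's accuracy (e.g. if accuracy is ~0.5 deg, adjacent words should be separated by comfortably more than that angle) gives a principled lower bound on the font size.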
I would like to capture EEG of users while they are using a sound interface, so I would like to know whether the 5-channel EMOTIV is enough, or whether I need to consider buying the 14-channel EMOTIV or another headset.
Thank you!
I'm doing research on eye-gaze tracking using the Tobii Eye Tracker 5. My programming language is Python. I am still unable to connect the camera with Python and am struggling to make progress. Please guide me on how to proceed with my research in Python.
Since with the pandemic it doesn't look like we are going back to the lab soon, in my lab we were looking for solutions to conduct remote webcam-based eye-tracking. I've been looking at GazeRecorder and Sticky from Tobii, but I am not sure either is suitable for scientific research. Do you know any options? Have you used either GazeRecorder or Sticky?
Thanks in advance!
Lectures
- 04/11/2020, 18:00-19:00 (time zone: UTC-3)
- Lecture 1: Do I always write the same way? Task effects on temporal organization during writing. Ángel Valenzuela, PhD candidate (UTal and UAut, Chile)
- 18/11/2020, 18:00-19:00 (time zone: UTC-3)
- Lecture 2: How is a final degree project revised? Operationalizing revision events in undergraduate theses using eye-tracking and keystroke-logging techniques. Sofía Zamora, PhD candidate (PUCV, Chile)
- 02/12/2020, 18:00-19:00 (time zone: UTC-3)
- Lecture 3: Investigating online revision processes: a study with university students. Dr. Erica Rodrigues (PUC-Rio, Brazil)
- 09/12/2020, 18:00-19:00 (time zone: UTC-3)
- Lecture 4: What do keylogger graphs reveal about writing processes? Dr. Luis Aguirre (UNCuyo, UDA, Argentina)
Organized by: Red Latinoamericana de Investigación Experimental en Escritura (ReLIE-Escritura)
Co-organized by: Facultad de Filosofía y Letras, Universidad Nacional de Cuyo, Argentina; Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Brazil; Pontificia Universidad Católica de Valparaíso, Chile; Centro de Investigación en Ciencias Cognitivas (CICC), Universidad de Talca.
Zoom link for the event:
https://puc-rio.zoom.us/j/96696763126?pwd=d2RZc1RjLzdxQ3BUUUpxS0EyNjBhQT09
E-mail: relieescritura@gmail.com

There are applications that allow the use of eye tracking on smartphones.
So far I have not tried or tested any of these applications. Based on my experience with webcam eye trackers, I suspect rather weak accuracy and precision.
- Who has experience with eye tracking on smartphones?
- What can you expect from eye tracking on smartphones?
- Can eye trackers on smartphones be used for practical purposes? (e.g. tracking how a user finds an icon)
I am currently exploring which software is available to support the analysis and visualisation of eye tracking data gathered in VR studies. The software would have to be compatible with HTC Vive Pro Eye.
Do you have experience in this regard? Your recommendations are very much appreciated!
P.S. Perhaps HTC offers some analytics as well, but I cannot find a detailed description, nor how useful these are for research purposes.
Kind regards,
Lizzy
Hello,
We have a setup of iMotions and Emotiv EPOC that a previous faculty member was using for attention and emotion recognition. As I understand it, iMotions receives the Emotiv data and timestamps it. However, I want to conduct research on ERP and BCI (e.g. P300), where synchronisation is critical. Is it possible to accomplish this using iMotions and the Emotiv EPOC+? I would greatly appreciate your help.
My study uses only fixation and saccade data. After removing 'unclassified' and 'eye not found' samples, I sometimes find two back-to-back fixations (or saccades) with nothing in between. What should I do when calculating the subsequent saccade between two such fixations?
Should I treat them as a single fixation by combining the two fixation points and adding their durations?
Please check the attached image.
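If the two adjacent fixations are close in space and time, the usual fix is to merge them into one fixation whose duration spans both, rather than inventing a saccade between them. A minimal sketch (thresholds are illustrative assumptions):

```python
# Merge consecutive fixations that are close in space and time: a common
# post-filter when the event detector splits one fixation in two after
# noisy samples are removed.

def merge_fixations(fixes, max_gap_ms=75, max_dist_px=30):
    """fixes: list of dicts with keys x, y, start, end (ms)."""
    merged = [dict(fixes[0])]
    for f in fixes[1:]:
        last = merged[-1]
        gap = f["start"] - last["end"]
        dist = ((f["x"] - last["x"]) ** 2 + (f["y"] - last["y"]) ** 2) ** 0.5
        if gap <= max_gap_ms and dist <= max_dist_px:
            last["end"] = f["end"]                 # extend the duration
            last["x"] = (last["x"] + f["x"]) / 2   # rough position update
            last["y"] = (last["y"] + f["y"]) / 2
        else:
            merged.append(dict(f))
    return merged

fixes = [{"x": 100, "y": 100, "start": 0, "end": 180},
         {"x": 110, "y": 105, "start": 210, "end": 400},  # gap 30 ms, close
         {"x": 400, "y": 300, "start": 450, "end": 700}]  # far away
print(len(merge_fixations(fixes)))  # -> 2
```

If the adjacent fixations are far apart instead, a saccade between them was probably lost with the removed samples, and reconstructing it from the two fixation positions is reasonable.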

The emergence of assistive robots presents the possibility of restoring vital degrees of independence in activities of daily living (ADL) for the elderly and the impaired. Although people can communicate their wishes in numerous ways, such as bodily expressions or actions and linguistic patterns, gaze-based implicit intention communication remains underdeveloped.
I'm focusing on developing a new, tacit, nonverbal communication paradigm for Human-Computer Interaction (HCI) based on eye gaze. To achieve high performance and robustness, conventional gaze detection technologies use infrared illumination and high-resolution cameras. These systems, however, require complex calibration and are thus limited to laboratory studies and difficult to implement in practice.
I'm looking to follow up on the Tobii Eye Tracker 5. It was released recently and is still being researched, and I haven't yet found any research on using it for gaze-based implicit intention communication and human interaction.
Editor/Co-author of my Collected Essays (on behavioral science) Needed
I have approximately 1000 pages of essays on new, more-empirical perspectives for Psychology (esp. General Psychology and Developmental Psychology -- but relevant and important for Psychology in general). It is all about BEHAVIOR PATTERNS (and associated "environmental" aspects, these _OFTEN_ broadly conceived) and a science of finding the further behavior patterning therein, and a patterning of those patterns, etc.; AND THAT IS ALL : In other words, the writings outline the discoveries likely possible and necessary for a true and full behavioral science of BEHAVIOR PATTERNS ("just behaviors") PER SE ("behaviors" then seen, as must be the case, as aspects of Biology (adaptation) unto themselves); it is much related to classical ethology perspectives and research. RELATED TO ALL THIS: There is an expressed great hope for some technology being the "microscope" of Psychology for good/closer/better and/or NEW observations; there are likely sets of adaptive behavior patternings and associated environmental aspects within quite-possible, if not VERY likely, SETS of situations (with the important "environmental" aspects/circumstances there, BUT the KEY environmental aspects will also be across KEY related/in-some-ways-similar -- and memorable -- circumstances). This is how/where related behavior patterns COULD COME TO BE OBSERVED in situ, AND even seen as they develop : even the subtle behavior patterns, etc., therein, truly-seen and clearly seen and truly and fully discovered _and_ seeing some key adaptive "operations" thereof. AND there is some detailed phenomenology described that allow one to arrive at testable hypotheses and then also indicating how this same basic sort of essential observations shall also naturally PROVIDE the actual ability to test these testable/falsifiable hypotheses.
I am looking for a skilled reader and editor to read/edit my written works AND THEN put them together in a most sensible manner. This person must know the field of Psychology as a whole and must understand possibilities of ontogeny. Also she/he should have a healthy respect and very high regard for KEY foundational observations (always such AS CENTRAL). Know of the Memories (all the sorts, now rather well-researched) as providing for phenomenological EXPERIENCE ITSELF and for connections, as indicated above.
Any one "fitting this bill" AND WILLING, and otherwise ABLE, I would gladly have. Doing such substantial editing/proof-reading/rearranging/publishing is enough for me to see you as a co-author and therefore I would put you as second author on all the book's covers. After publication, you (given details we shall decide upon well ahead of time) shall have a good and fair portion of any money reaped.
Hi, everyone,
I am doing research on eye-gaze tracking. What is the best camera for tracking eye gaze?
I am searching for eye-tracking studies. It would be good if the research also concerns peripheral vision. The principal intention is to evaluate and compare different eye-tracking tools and to show, in a comprehensive and convincing manner, the benefit of eye-tracking tools!
We are currently looking for a new mobile eye tracker. In the past, we used the SMI ETG. Since this company was bought by Apple, it no longer provides support. Therefore we are looking for a new company that provides eye trackers as well as supporting software. What experiences do you have? What are the best options available at the moment?
Can anybody report on experiences with VR headsets with integrated eye tracking? Does anybody know how much such devices cost? To me it appears quite plausible to assume that this way of presenting stimuli and recording eye movements in a psychological experiment would be quite practical and reliable at the same time.
Hi everyone,
In eye-movement tracking studies with babies, it is sometimes difficult to get a perfect calibration. I wonder if there are well-established criteria, thresholds or recommendations for excluding calibrations.
Any input - tutorials, method reviews, or advice drawn from researchers' own experience - would be very helpful.
Second, has anyone experienced slightly shifted calibrations (i.e. the experimenter perceives that the baby is looking at the right target, but the eye tracker maps the eye movement with a shift, e.g. to the right, probably due to an issue with the initial calibration)? Are there ways to correct those, or should the participants' data be discarded?
Many thanks in advance for experienced input.
Quick answer: NO (and why on Earth would you expect we are? Or that we, on our own and by ourselves, naturally would be? <-- sounds like old-time junk philosophy to me). And this will remain the case without good directed science -- and, as yet, some of the very most-central studies are not only yet to be done, but yet to be envisioned or accepted by our near-medieval present Psychology. ([Some of] all that is modern can VERY WELL NOT be congruent with all else that is modern.)
[ ( Title of this post intentionally made to mirror de Waal's book title: Are We Smart Enough to Know How Smart Animals Are? ) ]
See a good portion of my writings (all available on RG) for more.
I am currently writing my master's thesis on eye tracking and I want to run a visual search task on a set of products on a digital display. I want to record eye movements and evaluate fixations and saccades through webcam-based eye tracking. Then I need to analyze the data, possibly in SPSS. The sample size is 50 participants and I will give them a questionnaire. Do you know which is the best software I could use to program the experiment? Or a good company I should contact?
Thank you in advance.
I have two groups (10 participants in one and 12 in the other); each one viewed 1 stimulus (a web page presenting 4 products to choose between). One of the web pages has an orange label under one of the products while the other doesn't (that's the only difference). For each participant I have the number of fixations made on each product, and I know which product they chose in the end.
My hypothesis is that the group that has the stimulus with the orange label under one product would more likely choose that product.
Is there a statistical test to compare these groups and see if the orange label actually has influence on the choice?
thanks a lot.
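For this design, a common choice is a 2×2 contingency test of group (label / no label) against choice (target product / other product); with groups of 10 and 12, the expected cell counts are small, so Fisher's exact test is usually preferred over a chi-square test. A minimal sketch in Python, using only the standard library (the counts shown are hypothetical placeholders, not data from the study):

```python
# Fisher's exact test (two-sided) for a 2x2 table, stdlib only.
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the table [[a, b], [c, d]].

    Returns the probability, under the null of no association, of
    observing a table at least as extreme as the one given (sum of
    hypergeometric probabilities <= that of the observed table).
    """
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def p_table(x):  # hypergeometric probability of the table with cell (1,1) = x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - row2)
    hi = min(row1, col1)
    # Two-sided p: sum probabilities of all tables as or more extreme.
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

# HYPOTHETICAL counts: rows = group (label / no label),
# columns = (chose target product, chose another product).
p = fisher_exact_2x2(7, 3, 4, 8)
print(f"two-sided p = {p:.3f}")
```

Note that this test uses only which product was chosen; the fixation counts would need a separate analysis (e.g. comparing the number of fixations on the labeled product between the two groups).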
iViewXAPI.dll is loaded. Despite seemingly correct IP addresses, the connection always fails.
Best regards and thanks in advance
Hello researchers!
I'm looking to acquire some general commentary (pros/cons) from researchers using either the Tobii Pro Glasses or the Smart Eye Pro, or other setups!
I'm curious about things like sampling resolution for studying fixations and saccades, current gaze accuracy and precision, calibration procedures, presence of company support, but am open to hearing about other things as well.
Our context: driving simulator with wraparound display.
Generally, I've understood that Smart Eye Pro is more reliable and pragmatic, but also more expensive. Wondering whether the extra cost merits the added degree of quality.
Thank you!
Why is there a bias against inductive reasoning and in favor of deductive reasoning in the social sciences?
First, to establish there IS a bias:
It is OFTEN said (really as if it were a defining [damning] condition) that induction or inductive inference is "made under conditions of uncertainty". Then, in describing deductive reasoning/inference, there is typically NO SUCH mention of uncertainty. What? Just because one (or one and her associates) comes up with a hypothetico-deductive system of thought, _THAT_ SOMEHOW REMOVES UNCERTAINTY??? This is NONSENSE -- YET this [at least] is a very real AND DESTRUCTIVE "Western" bias: that when you develop some system to think with/in from WHATEVER actual data, then, simply because you are now thinking in/with that internally consistent system, you will develop clear hypotheses _AND_ (as the bias goes) THESE WILL LIKELY BE TRUE (as shown via their "testing" -- no matter what standard of "testing" you have come up with). (Descartes would have loved this.)
Now look at some of the TRUTH that shows this is VERY, VERY likely a VERY unwarranted bias, and it is quite conceivable that the opposite is true: decent Induction shows more clarity, reliability, predictability, and inter-observer agreement THAN almost all deductive systems.
If, in certain circumstances/situations, a behavior PATTERN (or patterns) can be specified and has a directly observable basis, then induction can show GREAT inter-observer agreement, _and_ this is sure-as-hell just as strong (actually, likely stronger) a result (a reliable, agreeable result/finding (discovery)) as most any p<.05 result found when testing hypotheses that come out of a hypothetico-deductive system. All you jackasses who cannot think that way should establish a re-education camp FOR YOURSELVES, or have nothing to do with science (other [real] scientists rightfully shun and ignore psychologists at any conference on science for scientists in general: they sense OR know what I am saying).
Yet, indeed, this very ridiculous bias leads people to come up with models where ALL concepts are NOT clearly rooted/beginning in directly observable overt behavior [PATTERNS]. (I have even read one group of researchers, who wrote a paper on the difficulties of understanding ABSTRACT CONCEPTS, trying to "define" abstract concepts (and thinking), saying: "I think we should develop a thorough MODEL FIRST" -- meaning: NOT after findings/data, but develop the model FIRST and, only then, look for the "behaviors". This is empirically unacceptable to an extreme. I believe such thinking would make Galileo throw up.) I have argued that a model cannot be good unless ALL concepts ARE rooted/founded/based in, or stemming from, directly observable overt behavior (again, actually: behavior PATTERNS). The fact that so very, very little research is discussed, during the conception of a MODEL (OR afterward), in terms of behavior PATTERNS indicates an absolutely fatal problem (fatal to any hope for a science of Psychology). Still, today, Psychology is Medieval.
This "Western" society is presently (STILL) too sick (crazy -- as Descartes would likely be considered today) TO HAVE ANY POSSIBILITY OF A SCIENCE OF PSYCHOLOGY. "Mere" BUT ABSOLUTELY ESSENTIAL OBSERVATIONS (and some associated discoveries) ARE NOT SOUGHT. (I believe if Galileo were here, he would say we have not yet made a decent start on a science of Psychology.)
What is true is that we will never, without proper bases and procedures, EVER understand important behavior patterns (and what aspects of circumstance(s) are related to them). (I shall not elaborate here, since so many want short answers (and ones damned close to those they have heard/"learned").)
Like other parts of my perspective and prescribed approach, this view is UNASSAILABLE !
Let my other thousand, or so, essays reinforce and trumpet what I have said here (they are all consistent with all my points and with each other, and these essays are here on RG).
P.S. Behavior patterns PER SE are an aspect of Biology, and very likely the recognition and discovery of behavior patterns can ITSELF (alone) provide a full science. If you think of "Biology" always as something else, then recall the re-education I have suggested.
The camera should allow recording at high resolution as well as a high number of frames per second. It should also allow transfer of the feed to the computer at high speed.
Thanks.
I am currently in a state of denial regarding the unavailability of this affordable and valid eye-tracking and pupillometry tool. Alternative suggestions welcome!
Hi everybody !
Here's my first question on this great platform:
What eye-tracking device should we choose?
Actually, I'm interested in pupillometry and fixation times/saccades. With my future team we will do IN-LAB tests, in front of a screen (so I think glasses are not necessary).
With my lab, we are thinking about buying a screen-based bar. With the software we use, we can have either a
- Tobii Bar X2-60
or a
- GazePoint GP3 60 Hz
I wanted to know if anyone has any recommendation? :)
The main point in my mind is that it seems we can't access Tobii's raw data, only data transformed by Tobii's algorithm... And for publishing (because that's part of the point) this could be problematic, because this algorithm is not validated in any paper... am I wrong?
Thank you very much for your answer,
Cyril Forestier
The nature and bases of abstract thought and processing can't continue to be unknown or confusing; we must relate each inception to key directly observable overt behavior patterns (and corresponding environmental aspects, or rather: often aspects of multiple circumstances). These are the EASIEST and yet, I believe, some of the BIGGEST "PROBLEMS" we have yet to solve (STILL, to date): SUCH very CENTRAL SETS OF DISCOVERIES MUST BE MADE, and they have not yet been well-attempted; there is NO reason such searching for the key observations, looking to establish key discoveries, cannot be attempted, especially now, with modern technologies (eye tracking, etc.); there are ways to solve this sort of problem, contrary to the views we have had historically in philosophy and within the limits of our "labs" (at least given conceptualizations thereof) -- all these being negative views, placing artificial limits on theorists'/researchers' imaginations. **
These are central problems for Psychology in general and for General Artificial Intelligence.
I have proposed, as something central to discovering such "starts" for each level/stage: doing better for ourselves, with recognizing/developing a better or more open and true conceptual structure, for self-understanding -- basically: trying/having a much better imagination about imagination _AND_ seeing our Subjects, themselves similarly, having the Memories (imaginations) with the needed spans and scopes, across and between sets of circumstances -- all in a real empirical, concrete, phenomenological way (and clearly a possible way). (Again, this is for ourselves, for really recognizing all the capacities of the human and the Subject; this would be coming to see that our imaginations (the Memories) can very well "time travel" back and forth through represented circumstances (in the "mind's eye") TO see aspects that only in those multiple contexts (which may superficially seem to be quite different) are VERY meaningful -- where only there (altogether considered-together) ARE MEANINGFULNESS-es resulting in abstract understandings, and abstract terms and processes (<-- thinking in/about such multiple circumstances and in those terms).)
[ Any notion that ANY concept does not have an important basis in concrete circumstances OR (similarly) the unfounded, self-limiting notion that some abstractions (abstract terms) are not related to ANY specific sets of real features of situations or circumstances IS FALSE AND DEBILITATING. Even the strangest of our abstractions MUST be founded/grounded/or starting IN directly observable overt behavior-patterned responses (circumstances, properly considered). The old-fashioned arrogant, yet very limiting, way of thinking of many historical philosophers MUST BE ABOLISHED. The old-time thought is neither empirically nor biologically sound. ]
** FOOTNOTE: [(Also, by the way, it is even quite conceivable that some discoveries of some key situational circumstances (even if, also, related to more) and related to key pivot points for/of some behavior patterning shifts and the new beginning understandings of "things" of/in KEY circumstances may even be possible to make in the lab [settings].) ]
Wouldn't experimental psychology (the "lab" setting) have a necessary bias AGAINST the existence and availability of some SKILLS & against any thinking of (across/about) multiple circumstances?
I contend: There are some skills, developed (or discriminated) across or between circumstances, that develop over more time and/or more circumstances (usually both) than can be detected or manipulated in the "lab" (using presently used procedures, at least). AND there may well be thinking of concepts FORMED (naturalistically) ABOUT existing or not-existing "things" AND/OR (also) relationships (relatedness (or NOT)) which involve mentally comparing [representations] between situations/circumstances that are very important in REAL, ACTUAL conceptualizations and thinking (in real "internal" phenomenology -- though based on ACTUAL EXTERNAL SITUATIONS/CIRCUMSTANCES that could be seen if OBSERVATIONS were more realistic __and__ [(relatedly)] imagination about imagination was more reasonably thorough). WE CANNOT SEE THIS (presently); we may NOT MANIPULATE THIS action by the organism IN THE LAB.
There is no doubt we (including AT LEAST even older children) must, can, and do do these things, BUT WE CANNOT DETECT (measure) (yet, at present) any KEY behavior patterns related to such activities, AND we cannot, and will not be able to, fundamentally manipulate such activities.
It is quite possible (if not likely): MOST HUMAN THOUGHT, realistically OR naturalistically considered, IS THEREFORE NOT THUS CONSIDERED (at all, or at all realistically) IN THE "LAB". (Thus, the existence of the homunculus (or homunculi) of executive control and all the "meta"-thises or "meta"-thats -- NEITHER strange type of concept IS NECESSARY IN ETHOGRAM THEORY.)
This IS NOT A LIMITATION OF SCIENCE or OBSERVATION, but a limitation of the lab and of typical experimental psychology.
Based on testable particular hypotheses from Ethogram Theory:
I should add that [still], based on the nature of the Memories, at least THE INCEPTION of each new qualitatively different level/stage of cognition would occur at some KEY times and "places" "locally" in circumstances, i.e. could be seen within the time/space frame of the lab: AS DIRECTLY OBSERVABLE OVERT BEHAVIOR PATTERNS -- and these discoveries could be made by using new sophisticated eye-tracking (and, perhaps, computer-assisted analysis) technologies (<-- these basically being our "microscope"). BUT you would have to know what to look for in what sort of settings _AND_ (at the same time) be able to recognize the KEY junctures in ontogeny and the development of learnings at which THESE shifts (starting as very basic and essential "perceptual shifts"; then becoming perceptual/attentional shifts) WOULD OCCUR.
I am planning to do eye-tracking studies involving multiple HMI screens. Are there any wearable eye-tracking devices available on the market? I am interested in devices somewhat similar to the Tobii Pro Glasses II but much more affordable.
Thanks.
I have been using an ASL H6 eye tracker to measure pupil diameter. However, the pupil-diameter data are expressed in pixels. It would be easier to have them in millimeters so as to compare the data with related works. Thank you.
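One common workaround, if the camera-to-eye distance is fixed (e.g. a head-stabilized recording), is to image an artificial pupil of known physical diameter at the same distance and derive a single millimeters-per-pixel scale factor. A sketch of the idea (the dot size and pixel values below are hypothetical):

```python
# Convert pupil-diameter samples from camera pixels to millimetres,
# assuming a calibration recording of an artificial pupil (a dark dot
# of KNOWN physical diameter) made at the same camera-to-eye distance.
# All numbers here are HYPOTHETICAL placeholders.

KNOWN_DOT_MM = 5.0   # physical diameter of the calibration dot
dot_pixels = 62.0    # diameter the tracker reported for that dot

mm_per_pixel = KNOWN_DOT_MM / dot_pixels

def pupil_px_to_mm(samples_px):
    """Scale a list of pupil diameters from pixels to millimetres."""
    return [px * mm_per_pixel for px in samples_px]

print(pupil_px_to_mm([55.8, 60.1, 58.3]))
```

This scaling is only valid while the eye-to-camera distance stays constant; for head-free setups the factor changes with distance, and a per-session (or per-distance) calibration would be needed.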
Hello,
I am in my final undergraduate year and I want to propose an experiment using an eye-tracking design. I plan to look at the relationship between autistic and anxiety traits in a face emotion recognition task. Thank you in advance!
Traditional and non-immersive computerised methods incorporate static and simple stimuli within a highly sterile environment, while the ecologically valid tests currently in use do not adequately reflect the complexity of real-life situations. In contrast, immersive virtual reality (VR) enables the implementation of dynamic stimuli and interactions within a realistic environment, and offers a high degree of control over the environment and the procedures involved. However, there is a scarcity of studies implementing immersive VR in conjunction with established neuroscientific tools such as neuroimaging (e.g., magnetoencephalography, electroencephalography, transcranial magnetic stimulation, and functional near-infrared spectroscopy) and physiological measurements (e.g., thermal camera, galvanic skin response, electromyography, and eye tracking). I would like to ask researchers who have already implemented any of the aforementioned neuroscientific tools in conjunction with immersive VR to share their insights regarding the advantages and disadvantages of using these tools in immersive VR research paradigms.
Hi,
I recently conducted an experiment using an eye-tracking device (Tobii) and its extension for E-Prime to verify the impact of visual distractions on a mathematical task. I previously built a virtual classroom environment in which, during the presentation of some equations on a blackboard, distractions appeared in other areas of the screen. I recorded participants' eye movements to detect whether the distractions were actually effective. E-Prime generated .gaze files that I open in Excel for interpretation.
Now, in my files I have the X,Y coordinates of the eye gazes of each participant recorded every 2/300 milliseconds.
My problem is that I would like to compare the coordinates from my .gaze files with the coordinates of the objects appearing on the screen, to determine whether the gaze actually fell in the area of the distraction or not, and therefore whether the recorded eye movements could be significant data for answering my research questions. I know this would have been easy by defining AOIs prior to the experiment, but unfortunately I did not.
So, does anyone know how to transform the coordinates from the gaze files into coordinates on the screen, so that I can determine whether participants were actually looking at the distractions?
Thank you in advance for your time.
Regards,
Sebastiano
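Defining AOIs after the fact is usually feasible: if the .gaze files log display coordinates in screen pixels (worth verifying against the display resolution used; some systems log normalized 0-1 coordinates instead, which must first be multiplied by the screen size), each distractor can be described as a rectangle and every gaze sample tested for membership. A minimal sketch (the AOI rectangles and sample points are hypothetical; the real ones would come from the stimulus layout):

```python
# Post-hoc AOI analysis: count gaze samples falling inside rectangles
# defined after the experiment. AOI coordinates below are HYPOTHETICAL;
# they should be taken from the stimulus layout (screen pixels,
# origin at the top-left corner).

AOIS = {
    "distractor_left":  (100, 200, 300, 400),    # (x_min, y_min, x_max, y_max)
    "distractor_right": (900, 200, 1100, 400),
    "blackboard":       (400, 100, 880, 600),
}

def aoi_hits(gaze_samples, aois=AOIS):
    """Return {aoi_name: number of (x, y) samples inside that rectangle}."""
    counts = {name: 0 for name in aois}
    for x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return counts

print(aoi_hits([(150, 250), (500, 300), (950, 210), (10, 10)]))
# → {'distractor_left': 1, 'distractor_right': 1, 'blackboard': 1}
```

From the hit counts per AOI one can then derive dwell-time proportions (hits × sampling interval) for each participant and distractor.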

In our project
we are using eye trackers and cameras.
Conference Paper Facial Expression Recognition based on LBP and HOG Features ...
An eye tracker consists of a camera, algorithms, and a computing node. A device or computer equipped with an eye tracker "knows" what a user is looking at. Algorithms calculate the eyes' position and gaze points.
What are the state-of-the-art algorithms, tools and approaches used in eye-tracker solutions?
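Most video-based gaze-estimation pipelines share the same skeleton: locate the pupil (dark-pupil or bright-pupil thresholding, often refined by ellipse fitting), locate the corneal reflection (glint), and map the pupil-glint vector to screen coordinates through a user calibration (polynomial regression or a 3-D eye model). As a toy illustration of just the first step, here is a dark-pixel centroid estimate of the pupil center on a synthetic image (a didactic sketch, not a production algorithm):

```python
# Toy illustration of the "dark pupil" step most video-based trackers
# start from: threshold the image for dark pixels and take their
# centroid as the pupil-center estimate. Real systems add glint
# detection, ellipse fitting, and a calibrated gaze-mapping model.
import numpy as np

def pupil_center(gray, dark_threshold=50):
    """Estimate pupil center as the centroid of pixels darker than threshold.

    gray: 2-D uint8 array (grayscale eye image). Returns (row, col) floats,
    or None if no pixel is dark enough.
    """
    ys, xs = np.nonzero(gray < dark_threshold)
    if len(ys) == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic test image: bright background with a dark disc at (40, 60).
img = np.full((100, 100), 200, dtype=np.uint8)
yy, xx = np.ogrid[:100, :100]
img[(yy - 40) ** 2 + (xx - 60) ** 2 <= 10 ** 2] = 10  # the "pupil"

print(pupil_center(img))  # → (40.0, 60.0)
```

For surveys of the full pipeline, the keywords to search are "pupil center corneal reflection (PCCR)" and "appearance-based gaze estimation" (the latter covering newer CNN-based approaches).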
I am looking for any recent review paper on the effects of forward and backward pattern masking on saccades with suggested neural pathways and models explaining them. I am relatively new to vision science and it would be a big help if someone can point me to a good review paper to begin with.
Isn't grounding all interactions (& our understanding of particular interactions) best done by better understanding the Memories AS (being) EXPERIENCE ITSELF? I see this as one of the 2 consistent common groundings for properly coming to an understanding of the concepts we come to have as a being, and this includes the development of not just bare simple concepts, but even the development of contingent SETS of such concepts, AND it includes that which comes of the developed and developing Memories, which allows for abstract thinking -- abstract concepts and abstract processing. Let me elaborate on this first type of thing:
First, realize: By the definitions of the Memories (our basic types of memory, all rather well defined by EXISTING research already), there is no way not to see EXPERIENCE as the operation of the Memories themselves (and THAT is EXPERIENCE ITSELF, literally true BY THE DEFINITIONS in modern perspectives and research). AND CONCEPTS MUST ALL BE BASED ON THIS. Thus, as experiences "grow" and as applications of our concepts (defined by interaction with environments: social and/or otherwise, linguistic and/or otherwise) become (to the extent that they can) more widely seen as relevant and applied, this simply occurs by way of the simple forms of associative learning (the definition of such FORMS being something that can be well agreed on). NOTE: All this eventually will only suffice WITH the second set of required groundings "emerging" for prompting MAJOR developments in ontogeny (see below) -- those influencing attention and learnings A LOT. Yet simple associative learnings seem to partly work (for a lot of the more bit-by-bit development), given evidence OF the existence of concepts/representations/ways-of-looking in the first place (such as is there, at least at later levels of child development). _AND_ these very simple associative learnings are ALL that would be needed at the major points in development, in addition to the base perceptual/attentional shifts (described below). In a sense, yet still, they will be THEN AND THERE all that's needed -- those simple learnings STILL being ALL of what's necessary to "put things together" even WHEN THE SECOND SET/TYPE OF MAJOR FACTOR IS FOUND AND SEEN (and as and when such shifts are occurring). Yet the above would not provide a complete picture of human learning and development. AT BEST, the Memories as they are at any point, plus associative learnings, are still just "half" the picture (as already has been indicated). BUT: What's the other "half", at least more specifically/functionally? :
These other major necessary factors are basically the capacities (or capacities within capacities, if you like) developing with very subtle innate guidances (which are not unlikely, and certainly possibly, at least for a time, quite situation-dependent); these, of course, lead to some of the most major developments of the Memories and, HERE, to qualitatively new learnings (still combining with "THE knowns" and with each other JUST THROUGH THE SIMPLE ASSOCIATIVE LEARNINGS). These innate guidances are at first just sensing more: THAT OF _THAT_ which is _THERE_ _IN_ any given concretely definable situation (where more adaptation is needed). This is reliant upon, and given, also the way our Memories have already developed (given our past learning, and earlier innate guidances, the products of which have become well-applied and consolidated (etc.), all of which yields "the time(s)" for some new types of learning). And now (from the good processing and consolidation; and discriminations here, perhaps just associative learning as dis-associations) this gives us, in a sense, a new or greater capacity in working memory (through more efficient "chunks" and/or some situations-specific "trimming" of the old chunks, both WITH CHANGES IN OUR _WAY_ OF CHUNKING; and realize: this may not preclude other adaptive reasons for an adaptive increase in the effective capacity of working memory (WM)). The details of the nature of the periodic innate guidances:
What is newly, or at least now truly, sensed -- sensed as "the-more": that which is sensed (and at least glanced at, if not gazed upon) in a situation or situations will lead to new perception of at least something more in the scope of "what's there". This will rather quickly go to perceiving more, and then to perceptual/attentional shifts (applying some of our past-developed categories and processing to the new "material" -- AND, at such also-adaptive points, offering more "material" to refine or moderate one's responses/interactions). Here, there will be more in WM, and thus more that can be "associated-with" via the simple forms of associative learnings (now, with some new content: new parts and likely new wholes). These developments might be quite situations-specific, at least at first, but they may develop to be concepts of rather great scope -- observations and other research which may well be possible are the ONLY things that will clarify all this. All we can say is that this will be some sort of BASIC KEY species-typical cognitive development (with its inceptions, as indicated) during ontogeny [(birth to 18 years old; minimally 5 MAJOR hierarchical levels or stages are historically seen (but with several modern theorists hypothesizing phases within each level); all this can be seen in the overviews of the great classic theories, still the most prominent in textbooks of General and Developmental Psychology)]. This very outline of this sort of process has NO limits (except human limits), and it includes the abilities to know, have, and use abstractions, INCLUDING contingent abstractions (holding true in just some sets of apparently similar circumstances; AND, eventually, with ontogeny and the development of sufficient abstract abilities, ALSO enabling the ability to think and classify across previously differently-seen [(i.e. seen as different)] circumstances -- putting such complexes together in a concept -- this sort of thing including the most sophisticated abstract concepts and processing there is): in some ultimate ("final", "rock bottom") analysis this is all possible because of demonstrable development and changes in the Memories, WHICH CAN BE RESEARCHED (as other characteristics of the Memories HAVE BEEN researched to date); AND the inceptions of new MAJOR LEVELS (those being with the "perceptual shifts" ... ) can also be directly observed and researched, using the new eye-tracking technology (and ancillary technologies) -- and this will greatly guide one to fruitful research on the Memories.
The reasons, likelihood, justifications, and better assumptions involved in having this viewpoint and understanding, AND the qualitative changes that are developed this way (basically starting with key, adaptive "perceptual shifts"), are what I spend much of my 800 pages of writing on: 200 pages, written some decades ago, and some 600 pages, written just in the last three years -- a lot of this latter being the job I did not finish back in the late '80s (and which I really had no reason to pursue until the development of new technologies, esp. eye tracking and related technologies, came into existence to allow for testing my hypotheses). I have also taken great pains in these latter writings to contrast this perspective and approach as thoroughly and completely as I could with the status quo perspectives and approaches in General Psychology and Developmental Psychology, and to show all the ways this [what I have dubbed] Ethogram Theory is better in so many, many ways, including in its basic foundations, being more clearly empirical (as directly as possible) than any perspective and approach heretofore.
I both show in detail what is wrong with the "old" and what is much more likely correct and useful -- and more than plausible (and Biologically consistent and plausible) -- through this new general view. (Again, I provide related testable hypotheses -- verifiable/falsifiable.)
You will be able to see this new approach as better empirically than any other. Related to this: the great benefit that the FIELD of study is ALL clearly and firmly based (grounded/founded) on just 2 "things": (1) directly observable KEY overt phenomena (behavior PATTERNS, here in Psychology) and (2) certain clear, directly observable and present aspects of circumstances/situations (aka "the environment") active in KEY past developments and/or present now. This is simply the return to the original and intended definition of Psychology _AND_, frankly, is THE ONLY WAY TO BE BEST-EMPIRICAL. (Think about it: NO MISSING CONNECTIONS.)
READ:
(see the Project Log of this Project to see many important Updates)
ALSO (not among the 200 pages of major papers and 512 pages of essays in my "BOOK", to which you have already been directed), the following link gets you to 100 more pages of worthwhile essays composed after the 512 pages:
https://www.researchgate.net/publication/331907621_paradigmShiftFinalpdf
Sincerely, with respect,
Brad Jesness
Dear researchers, while working with the SMI Experiment Suite 360° eye-tracker device, the software gives an iViewX registration-failure error, while other software like BeGaze and the Experiment Suite works well. It would be of great help if any of you have faced the same issue and can assist me with any pointers.
Can anyone help me with finding EyeCrowd dataset? The following link is no longer working.
Eye Fixations in Crowd (EyeCrowd) data set (http://www.ece.nus.edu.sg/stfpage/eleqiz/crowd.html)
Please point me to any new link available for this dataset.
Thanks!
Dear Esteemed Researchers,
For a research purpose, I need eye-tracker glasses to monitor eye movement. I have found two such glasses, the "Tobii Pro Glasses" and "Pupil Labs". They are multi-functional and expensive. I need to track eye movement only during driving & walking, and I am looking for low cost. Would you mind suggesting any eye tracker at a reasonable cost?
Thanks in advance.
Dear all
I am currently conducting an experiment that would require whole-brain EEG, eye-tracking system, and biofeedback signals recording.
In addition, participants in this experiment will be asked to perform a task. Hence, I am wondering if there is a way for me to synchronize the three systems I use, since precise timing is crucial for the subsequent data analysis.
The equipment I planned to use is EEG (32 channels, Neuroscan), Tobii eye glasses (or eyelink2000), and Procomp Infiniti.
The program I used to present stimuli is E-prime.
I recently read some articles that report how precise their time recordings are (e.g., the time lag between different pieces of equipment is within 20 ms). I am wondering if there is a way to measure this value and to synchronize the equipment. Thanks!
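A common offline strategy is to send the same event markers (e.g. TTL triggers from E-Prime) to every recorder, then estimate each device's clock offset and drift by regressing its marker timestamps against one reference stream; the residuals of that fit also give a direct estimate of the achieved synchronization precision (the "within 20 ms" figure articles report). A minimal sketch with two streams (all timestamps below are hypothetical, in seconds):

```python
# Offline alignment of two recording streams that both logged the same
# event markers. Fit ref_time = a * local_time + b by least squares, so
# slope a captures clock drift and intercept b the constant offset.
# The marker timestamps below are HYPOTHETICAL.

def fit_clock(local, ref):
    """Least-squares line mapping one device's marker times onto another's."""
    n = len(local)
    mx = sum(local) / n
    my = sum(ref) / n
    a = sum((x - mx) * (y - my) for x, y in zip(local, ref)) / \
        sum((x - mx) ** 2 for x in local)
    b = my - a * mx
    return a, b

# e.g. the EEG clock runs 0.1% fast and started 2.5 s after the eye tracker
eeg_marks = [10.0, 20.01, 30.02, 40.03]
eye_marks = [12.5, 22.5, 32.5, 42.5]

a, b = fit_clock(eeg_marks, eye_marks)

def to_eye_time(t):
    """Map an EEG-clock timestamp onto the eye-tracker clock."""
    return a * t + b

print(round(to_eye_time(25.015), 3))  # → 27.5
```

The same fit generalizes to three devices by mapping each onto one chosen reference clock. Frameworks such as Lab Streaming Layer automate exactly this kind of clock-offset correction across streams, which may be worth considering if the hardware supports it.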
I'll start by repeating the title, above: What psychologists have not yet realized is that eye-tracking technology, etc. ALLOWS FOR AN _OVERALL_ MORE EMPIRICAL APPROACH !!
The new technologies are not just a tool for the kind of "empiricism" psychologists already do!
I have described, and formalized into concrete, now-testable hypotheses, that which would establish the most empirical grounding for "abstract" concepts. More empirically grounded and founded than anything heretofore, without a doubt -- and the view/approach is biologically LIKELY, and this approach to research (on some new CONTENT it is good for) has not yet been tried. It involves "believing" nothing (actually, believing MUCH less "on faith"); it really involves simply more empiricism, more direct observation [specifically: discovering the DIRECTLY OBSERVABLE OVERT behavioral foundations for the qualitatively different levels/stages of cognitive development -- and HOW __LEARNING__ ITSELF (presently often ill-defined) CHANGES WITH THIS NEWLY OBSERVABLE PHENOMENON, and the consequences, ALSO].
I have tried to clearly outline (ending with most-empirical and testable hypotheses) the inception of abstract concepts with "perceptual shifts" (thus providing them a concrete, in-the-world foundation).
Again, the theory has to do with "perceptual shifts", NOW -- presently (at this point in history) -- able to be SEEN with new technologies: SEEING what subtle overt behaviors likely occur at the inception of each higher level of thinking during ontogeny. The outlook and approach is a cognitive-developmental theory -- i.e. of human child development -- and for finding more of the major parts of who we all are.
You might well want to take a look:
The perspective and approach especially and specifically has to do with: perception and quickly/soon after that: attentional and gazing changes which likely occur at the inception of new qualitative cognitive developments (with ontogeny) (and literally, sensibly, set them off).
The following theory, with its most-empirical and testable hypotheses, indicates (clearly, with as good and totally empirical "guesses" as are now possible) the nature of these perceptual/attentional shifts accompanying (actually: "starting off") major qualitative changes in cognition:
Here it is :
Minimally, read both of the major writings:
Not only
BUT ALSO the much, much more recent:
(these much later, recent essays filling in some of the aspects of the treatise not originally provided, as stated directly in "A Human Ethogram ... " itself).
This theory does a LOT else correctly (unlike other theories) in abiding by necessarily applicable principles and seriously trying to have a perspective and approach which has ALL the features and dimensions a science theory should have. It is parsimonious. It uses the well-developed vocabulary of CLASSIC ethology (not the bastardized 'ethology' of today).
Psychologists may ignore this, but that would be just ignoring a most-empirical way of study (and ignoring some of the most-empirical, most-testable hypotheses). In short, it is scientifically WRONGFUL for psychologists to ignore this.
P.S. ALSO: Because all of this is so much more concrete, this theory of development and changes in learning should be THE theory of most interest to those trying general artificial intelligence.
I am thinking of Psychology researchers and theorists. Is it their duty to science to investigate the possibilities of important new tools and possible discoveries that involve empiricism at its best: attempting direct observation of possible/likely important overt behaviors, heretofore not seen?
For example, IN PARTICULAR:
Hi everyone,
I was just wondering if anyone would be interested in eye-tracking as a method in science education research becoming a Special Interest Group in ESERA.
There are several conditions we need to meet in order to be acknowledged and supported, nevertheless, the idea seems appealing.
Please, let me know in case you'd consider joining us.
Best for now,
Martin Rusek
Would it be theoretically possible to take eye tracking (gaze position) data and feed it into some kind of wearable device/virtual reality setup that would cause the whole body or parts of the body to feel as if they are moving in a direction that's counter to how the eyes are moving? If so, what kind of technology would be the best way to do this?
Models and [ non-concrete * ] Mechanisms: Don't they seem to have the same problems with respect to actual phenomenology and what is real?
Maybe they are temporarily necessary, but should be avoided and should be bettered (AND REPLACED) as good research progresses. If this betterment does not happen, you are not doing at least some of the essential research (likely observational). PERIOD.
Isn't it possible that the best understanding is just the knowledge of, and understanding of, SEQUENCES? (Of course these can be "made sense" of, within the "whole picture", i.e. the greater overall understanding -- and there is "purpose" or direction to each behavior pattern [in the sequences].)
{ ALL this increases the key role (and sufficiency) of all the simple [ basically known ] sorts of associative learning ALONG WITH OUR SEVERAL SORTS OF MEMORIES. "Outside" of innate guidance WITH PERCEPTION/ATTENTION (including innate guidance in later stages/periods of development, with behavioral ontogeny) (and this innate guidance being WITH the simple learnings and Memories) AND their consequences with behavior patterns: the well-understood simple learnings may ultimately provide "the 'glue' for 'the whole story'" , otherwise -- i.e. other than the key "driven" directly observable sequences **.
AND NOTE: NO need whatsoever for special sorts of theorist/researcher-defined types of learning, e.g. "social learning", etc.. NO need for ANY of the "metas", presently a major homunculus.
This perspective "conveniently" has the advantage of being conceptualizable and able to be clearly communicated -- requirements of ANY good science. It is within our abilities (as adults, at least at particular times) to actually 'see', i.e. to have and to provide REAL UNDERSTANDINGS. In my view, the other "choices" seem not to have these distinct characteristics (so, the perspective above is either true OR we all may well be "screwed").
* FOOTNOTE: "Concrete" meaning: with clear, completely observable correspondents; AND, likewise for models, with any promise (of progress and replacement).
** FOOTNOTE: "Directly observable" meaning: can be seen (and agreed upon AS SEEN) IN OVERT BEHAVIOR PATTERNS (AT LEAST AT KEY TIMES, e.g. with the inception of new significant behavior patterns).
--------------------------
P.S. This (above essay) may seem "self-serving", since I have a theory putting all of the positions/views above TOGETHER cogently and with clear testable/verifiable(refutable) HYPOTHESES (using modern technologies, eye-tracking and computer-assisted analysis). See:
See especially:
https://www.researchgate.net/publication/286920820_A_Human_Ethogram_Its_Scientific_Acceptability_and_Importance_now_NEW_because_new_technology_allows_investigation_of_the_hypotheses_an_early_MUST_READ
and
https://www.researchgate.net/publication/322818578_NOW_the_nearly_complete_collection_of_essays_RIGHT_HERE_BUT_STILL_ALSO_SEE_THE_Comments_1_for_a_copy_of_some_important_more_recent_posts_not_in_the_Collection_include_reading_the_2_Replies_to_the_Comm
AND
the Comments to (under) the second-to the-newest Update on the Project page: https://www.researchgate.net/project/Human-Ethology-and-Development-Ethogram-Theory (for EVERYTHING)
Hello,
I'd like to associate physiological (eye-tracking) measures with behavioural measures (reaction times) in my experiment. I'm building my protocol in Tobii's software but I can't find any way to record reaction times.
Do you know of a potential add-on that could do this?
Dear Researchers,
What do you think about using eye-tracking technology to investigate learners' cultural differences like religion, race, language, ethnicity or social class? Any available studies on this topic?
Best regards,
EL HADDIOUI
Our group is considering buying an eye tracker from Tobii (e.g., the Tobii Pro X3-120) for studies on healthy subjects and anorexic patients, most of them underage. We have little experience with eye-tracker models. To us, it seems that Tobii provides a more modern built-in software environment than iMotions (> set-up of experiments), but we are wondering whether Tobii also provides suitable data accuracy for scientific purposes. We would be happy about any experience reports!
Hi guys, are there any general metrics for measuring attention via eye tracking? I mean something like: if you have long fixations (above 200 ms), fewer of them, and longer saccadic movements, does that mean your attention on the task is well developed? Is there any rule of this kind that I can use across different tasks in general?
Thanks a lot
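There is no single agreed-upon rule, but the quantities usually reported (fixation count, mean fixation duration, mean saccade amplitude) are easy to compute from an exported event list. A rough sketch, assuming fixations arrive as (x, y, duration_ms) tuples -- an ad-hoc format of mine, not any vendor's:

```python
import math

def attention_metrics(fixations):
    """Summarise a list of fixations given as (x, y, duration_ms) tuples.

    Returns the fixation count, mean fixation duration (ms), and mean
    saccade amplitude (Euclidean distance between consecutive fixation
    centres, in the same units as x and y).
    """
    count = len(fixations)
    mean_dur = sum(f[2] for f in fixations) / count
    amplitudes = [
        math.hypot(b[0] - a[0], b[1] - a[1])
        for a, b in zip(fixations, fixations[1:])
    ]
    mean_amp = sum(amplitudes) / len(amplitudes) if amplitudes else 0.0
    return count, mean_dur, mean_amp

count, mean_dur, mean_amp = attention_metrics(
    [(0, 0, 250), (3, 4, 300), (3, 4, 200)]
)
```

When comparing these metrics across tasks, normalising by total viewing time is usually advisable; longer fixations with shorter saccades are commonly interpreted as focal processing, and the opposite pattern as scanning, but the interpretation is task-dependent.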
I have heard people suggest tracking the dominant eye when doing single-eye tracking, because the "dominant eye will give more accurate data". However, in normal subjects the fixation locations of the two eyes should be quite consistent. Is the data quality significantly different between dominant-eye and non-dominant-eye tracking?
I am still of the mind that it is possible to have a science of Psychology where the only things studied are behavior patterns and associated environmental aspects. AND: Key to this is finding and having some most-significant, pivotal, foundational BEHAVIOR PATTERNS (DIRECTLY OBSERVABLE OVERT BEHAVIORAL PATTERNS) -- ones which can be seen at least at key times and, at least, at the INCEPTION of any significant new behavior patterns involved in major shifts in cognition and cognitive development. [ (THEN, otherwise, behavior is credibly just altered by simple, relatively easy-to-understand processes -- in particular, the various sorts of associative learning.) ]
My perspective and approach describes in great detail how this can be the case and the major necessary hypotheses are directly testable (verifiable), being verified by finding major yet-to-be-discovered DIRECTLY OBSERVABLE OVERT BEHAVIORAL PATTERNS (when you know how and when to look to find them). These major behavior patterns involve Memories-contextualized "perceptual shifts", with subtle but clear overt behavior patternings as their aspects -- these, along with environmental aspects, BEING ESSENTIAL PROXIMATE CAUSES of behavior pattern change (not only with the new behavior patterning, but also at times importantly affecting already-existing behavior patterns). The major NEW inventions that allow for researching this, and having these phenomena discovered, are the new eye-tracking technology (and computer-assisted analysis).
This is the way (not yet tried) to keep Psychology as "the science of behavior" [(the "behaviors" of the various sorts seen as important at one time in the history of Psychology or another and, NOW, ALL BEING "admitted" and seen as aspects of behavior)]. Of course the other (ONLY other) key things involved being the "triggering" (or key facilitating) ENVIRONMENTAL ASPECTS.
Has this definition of Psychology as "the science of behavior" been abandoned or corrupted [ with models by-analogy (e.g. with information processing as could be done by a machine); OR phenomena of uncertain relation to actual most-important behavior (e.g. crude neuroscience); OR by using instead elaborate speculative conceptualizations, which could NEVER have any direct evidence supporting them (e.g. "embodiment" 'theories') ] ? I say: "YES. PSYCHOLOGY, THE SCIENCE OF BEHAVIOR, has been abandoned and corrupted in at least these three ways."
BUT now, with a new perspective and with new ways to detect more subtle behavior patterns, we now CAN RETURN to the classic kind of definition Psychology has had over many decades (with the focus on "behaviors"/environmental factors thought to suffice). My perspective and approach ACTUALIZES this, and in the process eliminates any nature/nurture controversies BY BEING NOT ONLY PSYCHOLOGY IN THE CLASSIC SENSE BUT, at the same time, being the BIOLOGY OF BEHAVIOR -- the biological structure and nature seen in just behavior patterns THEMSELVES.
My "biology of behavior" project :
See especially:
and
I would like to know sources of datasets where I can download data easily, without any obstacles.
Re: cognitive-developmental psychology: Is it a bad sign if one has only done ONE thing in her/his entire lifetime?
This is basically, in part, a confession. If you knew how true the "one thing" was in my life, you would likely consider me lazy and privileged. I can accept both labels and can clearly see it that way (at least from the standpoint of some very good people). Moreover, I have had the ability to have anything and everything I thought I needed -- essentially at all times.
But, perhaps as is the only interpretation imaginable, you suspect I am making such admissions just to further the exposure of my perspective and approach. That is completely true. And, I do contend that (with having all resources), I lived virtually all the years of my life looking for a complete and the best thoroughly empirical perspective. Even in my decades of college teaching (more like 1.5 decades), my courses and presentations had coherence most certainly as a function of my views. THUS, indeed, in fact: I have never done anything else in my life other than that needed to produce the papers, book, essays, etc. that I present here on RG (or make readily available through RG). To have a picture of my life, one should imagine about 30 years of it operating much as a hermit (for all that can be good for -- and I do believe it can be good for something).
I started with a core and moved carefully in adopting any aspect of my perspective (basically starting from the position of just what is possibly at-the-very-least needed, and maintaining extreme parsimony). And, again, I am a most thorough-going empiricist, believing that EVERYTHING has a core foundation of some behavior which, at least at some key point, is both overt (though maybe quite subtle) AND directly observable (and now practically so, via eye-tracking). My entire perspective and approach relies pivotally and mainly on such foundations and otherwise only on the best findings and extremely widely-affirmed processes IN ALL OF PSYCHOLOGY (things showing the very best inter-observer agreement). All this is not any kind of abstract or wide set of things. The other prime objective ("directive") has been to NOT [just] link but PUT behavior (behavior patterns) clearly IN a biological framework -- showing as much as possible the "biology of behavior"; this had the rewarding result of eliminating critical and serious dualisms, esp. nature/nurture.
Assumptions or presumptions (pseudo-assumptions) in Psychology had to be exposed as both unproven and not well-founded. A half dozen central "assumptions" have been replaced in my system BY BASICALLY THE OPPOSITES -- these replacement assumptions being fully consistent with biological principles and more likely true. I also show in my work how to use all the terms of classical ethology, this also allowing or furthering the "biology of behavior".
In short, though this should be to some degree a shameful confession (and many would have to believe that is part of it), my work is MINE (compromising nothing; adhering to principles) -- and it is good **. Please take some time to explore it, starting at: https://www.researchgate.net/profile/Brad_Jesness2 Thank you.
** FOOTNOTE: The perspective and approach is explicit and clear enough for artificial intelligence also -- a good test. BUT: For the great advancements needed in Psychology and major practical utility in AI, we need DISCOVERIES, the nature of which are indicated in testable (verifiable) hypotheses, clear in my writings -- MUCH awaits those discoveries. The same discoveries are involved for either field.
P.S. For 20 years of my hermitage I did have the strong "hobby" (avocation) of JavaScript programming; I never made any money from this. I tell you this just to make sure the portrayal is accurate -- and to in no way mislead. (See http://mynichecomputing.org , if you are curious.)
Hey everybody,
I'm planning an experiment using the SMI RED 500 eye tracker. The manual says that this remote eye-tracking system automatically controls for head movements, so I wondered whether or not the pupil-size output has to be corrected for the respective gaze positions.
Does anyone have experience with this?
Kind regards
Kilian
There have been several learning theorists now that speak of non-associative influences on learning. Here are some quotes from a few:
(My important Comments follow the quotes, below.)
QUOTES From "Three Ways That Non-associative Knowledge May Affect Associative Learning Processes" by Thorwart and Livesey:
"While Mitchell et al. (2012) favored an explanation purely based on conscious reasoning processes, where participants deliberately attend to the cues they believe are important, a viable alternative is that attentional processes are brought under conscious control and thus let non-associative knowledge influence the course of subsequent learning."
"In some circumstances, associative activation of the outcome may form the strongest available evidence about what is going to happen when a cue is presented, or the strongest indicator of how the individual should behave. But under other circumstances, for instance where it is very clear that a deductive reasoning process should be used, associative memory retrieval may play a relatively minor role "
"a viable alternative is that attentional processes are brought under conscious control and thus let non-associative knowledge influence the course of subsequent learning. This source of influence does not necessitate that non- associative expectations fundamentally change the operations of the associative network itself, merely what it receives"
"In addition, if non-associative knowledge can affect the way stimuli are represented then this knowledge may also change the manner in which associative retrieval generalizes from A to AB"
---------------------------
QUOTES From Mackintosh Lecture: Association and Cognition: Two Processes, One System. I.P.L. McLaren et al:
" ... does not shy away from placing associative processes at the very centre of our dual process account, and postulates that propositional processing is built upon associative foundations"
"... we are propositional entities constructed from an associative substrate."
----
QUOTE From "Moving Beyond the Distinction Between Concrete and Abstract Concepts" by Barsalou et al.:
"Conversely, when people generate features of abstract-LIT concepts, they typically generate external elements of the situations to which they apply. "
-----------------------
My IMPORTANT COMMENTS:
The problem for these theorists/researchers is that their "new propositions", "non-associative factors" and "new generalizations" ARE INTRACTABLE. Such phenomena do indeed seem to be inferable, but these theorists have no way to find their source (any empirical grounding). Thus, these theories at present have no empirical referents at the major points where they need them to "get to go where they want to go".
Well, I actually address the same things: in EFFECT providing for new propositions (used in deductions), new generalizations, and what appear to be non-associative factors. BUT, my theory sees the origin of these effects IN QUALITATIVELY DIFFERENT cognitive stages, and due to "perceptual shifts". BUT, here is the REALLY GOOD NEWS: I indicate an empirical way to discover the "perceptual shifts", using new eye-tracking technology and computer-assisted analysis. I describe what to look for in enough detail to do the eye-tracking studies, during ontogeny -- at key points. Thus, my theory, which provides for the same kind of shifts in learning, HAS TESTABLE HYPOTHESES. If the hypotheses of my ethogram theory are verified (and they can be, if the theory is correct), we will at least have found the concrete directly observable overt behavior patterns associated WITH THE INCEPTION of that which yields the new abilities/phenomena.
One other thing: Because the proximate cause (outside environmental factors and contextualization from the Memories -- which both can be seen as the other simultaneous proximate causes) IS "perceptual shifts" then nothing is divorced from ASSOCIATIVE LEARNING. This is also the end of the nature/nurture false dualisms. All still involves associative learning -- and no strange "non-associative" stuff.
See:
and
Also See:
We are about to transfer our research on EEG feedback training for athletes to the area of truck drivers. We would like to know if there is research in this area which we could access. It would also be interesting to know whether other measures have been used in this area, e.g., HRV or eye tracking.
Hi, we are currently running a working-memory experiment on healthy, normal subjects with EEG and eye-tracking measurements. We have no prior background in psychology, but we have heard that some psychology labs have subjects complete questionnaires before the actual experiment.
We would like to know: is it standard practice to administer questionnaires before the actual experiment in human neuroscience research? If yes, how can we find appropriate questionnaires? If no, what kinds of experiments need a questionnaire beforehand?
As a newbie and low-budget neuroscience researcher, I'm looking for webcam-based eye-tracking software that can measure "duration of initial fixation", "total fixation duration", and "area of interest" accurately and precisely.
I found GazeRecorder, and some kind of out-of-date project;
can anyone recommend one, or has anyone used webcam trackers in their clinical research?
thank you for your answers,
Dr. Mirac
I can assure you my way is empirical and all major hypotheses are directly testable (via direct observation of overt behavior patterns). It is a viable approach, with all testable hypotheses, and with explicit, well-founded and biologically-consistent assumptions behind it all. Eye-tracking technology will be needed and perhaps computer-assisted analysis. FIRST, See:
then you must see the recent LARGE Collection of Essays explicating and fully justifying my approach and clearly indicating the positive consequences and ramifications : HERE'S the BOOK:
* PLUS * : YOU MUST SEE THE COMMENT _AND_ THE 2 REPLIES TO THAT COMMENT (below the BOOK's shown text), to have all the needed specifics.
EYE-TRACKERS: If you do not want to read as much as I ask people to do above, you should be able to get a pretty good idea of what would be involved and if you could do it by just reading COMMENT _AND_ THE 2 REPLIES TO THAT COMMENT on the same page as the BOOK. (This is less than 10 pages.)
--> Can modern eye-trackers do what I clearly indicate needs to be done? <--
Is the following list the characteristics of the things which are the bases of psychological understandings for General Artificial Intelligence?
The material below is from https://www.researchgate.net/project/Developing-a-Usable-Empirically-Based-Outline-of-Human-Behavior-for-FULL-Artificial-Intelligence-and-for-Psychology -- "Project Goals (for General Artificial Intelligence and psychological science)" -- slightly elaborated. (Also, this Project is where you can find additional information and "specs".)
Project Goals (for General Artificial Intelligence and psychological science)
Project strives to be:
* nothing more than needed, while WELL-ESTABLISHED, BEING ALWAYS clearly-related to the most reliable, strongest scientific findings in psychology (this is, in particular: facts and findings on the Memories)
* enough to embrace a good part of everything, providing a very likely main overall "container" -- with EVERYTHING addressed, founded on, grounded on, OR clearly "stemming" from: discovery of and direct observation of overt behavior patterns (done by providing clear and likely ways to discover the specific, direct, explicit, observable empirical foundations to qualitative cognitive stages -- something completely lacking in modern psychology otherwise). All hypotheses related to all positions (in THIS LIST and in any References) ARE testable/verifiable (at least now, with eye-tracking technologies and computer assisted analysis).
* having ALL that is needed AND which is all-concrete (explicit, specified, or FULLY defined-as-used or thusly definable), at the same time: so as to provide for Generalized Artificial Intelligence and good science, otherwise. [ There may be one seeming exception to elements being "clearly specified" : the "episodic buffer". And that can be defined "relationally", simply having a state plausibly/possibly inferred from all the [other] more concretely defined elements (with their characteristics and processes).]
* providing for self-correction and for continuous progress as science (actual psychology) (as real and good science, and good thinking, is) And, not coincidentally, providing for continuous development of the AI "robot" itself (by itself; of course: experience needed).
* consistent with current major theories to the full extent justified, but contrasted by having a better well-established set of assumptions, thoroughly justified and explicated. An integrative perspective, equally good for appropriate shifts in all theoretical perspectives (in the end, each theory allowing MORE, and being more empirical)
* proving (by amassing related evidence of) the inadequacy of current perspectives on and approaches to behavioral studies (addressing current psychology-wide pseudo-'assumptions')
* an approach which ends obviously senseless dualisms, e.g. nature/nurture; continuous/discontinuous, which just impede understanding, discovery, and progress. This is inherent in the "definitions" of elements and processes (all from observations or most-excellent research; and largely inductively inferred) .
It is good for psychology (it IS psychology) and General Artificial Intelligence, as well.
NOTE: (1) Nothing above should be seen as merely descriptive (this implies too much tied to certain situation(s) and/or to abstraction(s), always lacking true details; it also probably implies too much related to human judgment).
(2) Nothing -- no element or constellation of elements -- is operationally (as they actually come together and 'work') as envisioned only by, or in any way (at all) mainly by, human conceptualization OR human imagination.
(3) The Subject is ALL and shall be seen just as it is (at least eventually), and should always be THE guide phenomenologically at all times to move toward that goal.
I believe this is the only way our algorithms will correspond to biology and that AI will really simulate US.
[ P.S. I have tried to much more specifically direct people to answers to Questions such as above, FOR BEHAVIORAL SCIENCES in general, in my major papers here on RG (esp. "A Human Ethogram ... ") AND in my many, many essays, now most in a 328-page BOOK, Collected Essays (also on RG). General Artificial Intelligence is, in effect, a behavioral science itself. ]
Hi all,
I'm interested in finding out whether preferences for visual cues (i.e. Y and Z) differ between two groups of subjects (i.e. A and B) across 5 different scenes (1, 2, 3, 4, 5). Each subject views each scene 2 times (yielding a total of 10 trials) and was supposed to provide a verbal response upon reaching a decision (i.e. trial duration was not standardised). Within each trial, each subject may view each visual cue more than once. In total, there were N = 760 fixations. I also have a small, roughly balanced sample of participants (14 in A, 12 in B).
After some reading, I think I'm supposed to use multilevel logistic regression. I'm aware of the violation of the assumption of independence of observations -- but this happens with eye-tracking studies.
That said, I’m currently using SPSS. There doesn’t seem to be a button or place for me to account for the repeated measures element within the analysis GUI.
(1) Can I use a Generalised Estimating Equation (GEE)? If no, why?
(2) How should I organise my data so I can account for the within trial and within person measurements ? Should I just add another column Trial No.?
Currently, I've organised my data in long format with the following:
(1) Dependent variable = visual cue (Y or Z)
(2) Predictor variables:
(2a) participant group (A or B)
(2b) scene (1, 2, 3, 4 or 5)
If anyone can assist me in answering the above queries so I can ease into the analysis, that will be great!! Many thanks in advance!
Dear fellow eye-tracking-researchers,
In studies with multiple displays, a user potentially needs more freedom to move (e.g. coming towards or away from a display, or turning to the sides) compared to a single-display setup. In such cases, sustained (uninterrupted) tracking of the user's point of interest (a calibrated data stream) is challenging for an eye-tracking system with certain spatial limitations. Thus, either mobile eye tracking or a combination of real-time head tracking with eye tracking seems to be a relevant solution.
For multiple display setups, in your opinion
- Which "Head & Eye Tracker" combination would you suggest?
- Which mobile eye tracking system would you suggest?
- In which application area, and for how long, have you tested this system?
Thank you & Best regards,
Kenan
The following link is a good place to start (and it provides links to other writings):
https://www.researchgate.net/post/A_Beginning_of_a_Human_Ethogram_seeing_the_inception_of_cognitive-developmental_stages_as_involving_a_couple_of_phases_of_non-conscious_perception
Hello, I am looking for inexpensive hardware options for fixation monitoring during a PC-based cognitive experiment. The area that participants should not look outside of is rather small (2 cm radius). I am new to eye-tracking, so would greatly appreciate some recommendations. For example, can this precision of gaze monitoring be achieved even with a mere webcam?
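For context: once you have gaze coordinates in screen pixels, checking whether a sample stays inside a circular zone is trivial; the hard part is getting coordinates accurate enough. Webcam-based tracking is often only accurate to a few degrees of visual angle, and a 2 cm radius at a typical viewing distance of ~60 cm is roughly 2 degrees, so a dedicated tracker is likely safer. A sketch of the zone check itself (the function name and the pixels-per-cm figure are placeholders, not from any particular system):

```python
import math

def gaze_in_zone(gaze_x_px, gaze_y_px, center_x_px, center_y_px,
                 radius_cm=2.0, px_per_cm=38.0):
    """Return True if a gaze sample lies within a circular zone.

    Coordinates are in screen pixels; px_per_cm converts distances to
    centimetres (38 px/cm is a placeholder for a ~96 DPI display --
    measure your own screen).
    """
    dx_cm = (gaze_x_px - center_x_px) / px_per_cm
    dy_cm = (gaze_y_px - center_y_px) / px_per_cm
    return math.hypot(dx_cm, dy_cm) <= radius_cm

inside = gaze_in_zone(960, 540, 960, 540)          # dead centre
outside = gaze_in_zone(960 + 200, 540, 960, 540)   # ~5.3 cm to the right
```

In practice the check would run on every sample in the tracker's callback, flagging or pausing trials when gaze leaves the zone.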
I would be interested in how much the number of participants who fall out of the sample because of measurement difficulties (e.g. the pupil cannot be pinpointed) depends on the eye tracker or the researchers' experience. I have read about numbers between 5% and 20% -- there seem to be many differences. What are your experiences?
It's really sad news that Eye Tribe has stopped making low-cost eye-tracking devices. Are there any similar alternatives? I want to conduct academic research with a movable eye-tracking device that can be mounted on a laptop. I will also require statistical data from the device for analysis purposes. Thank you