Science topic

Visual Attention - Science topic

Explore the latest questions and answers in Visual Attention, and find Visual Attention experts.
Questions related to Visual Attention
  • asked a question related to Visual Attention
Question
1 answer
I'm doing a research proposal and want to compare bilingual people to monolingual people on a dot perspective task.
- The first IV  (IV1) will be language ability with two levels: monolingual (control) /bilingual
- The second IV (IV2) will ONLY be applied to the bilingual group: participants are informed that the avatar is either bilingual or not (so two levels again, repeated measures with counterbalancing)
The DV is reaction times in the dot perspective task.
I am just wondering how I would go about analysing this? I was thinking of an ANOVA, but as the control group is not exposed to IV2, do I simply compare the means of all groups?
I want to compare
  1. Control group reaction times to BOTH levels of IV2 combined (overall RT for bilinguals)
  2. Control group reaction times to each level of IV2
  3. Level 1 vs level 2 of IV2 (whether avatar is said to be bilingual or not)
Is it best to split this study into 2 experiments or is it possible to keep it as one and analyse it as one?
Relevant answer
Answer
Hello,
You can use a mixed-design ANOVA with language ability as a between-subjects factor and the avatar's language ability as a within-subjects factor for the bilingual group only. Planned contrasts or post-hoc tests can compare the control group to bilinguals.
Hope this helps
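If it helps, the three comparisons listed in the question can be run directly as planned contrasts. Here is a minimal sketch in Python with scipy on made-up reaction times; the sample sizes, means, and variable names are all hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: 30 monolingual controls (single condition) and
# 30 bilinguals tested in both avatar conditions (repeated measures).
control = rng.normal(650, 80, 30)
bilingual_avatar_bilingual = rng.normal(620, 80, 30)
bilingual_avatar_monolingual = rng.normal(640, 80, 30)

# (3) Within-subjects contrast for the bilingual group: IV2 level 1 vs level 2.
t_within, p_within = stats.ttest_rel(bilingual_avatar_bilingual,
                                     bilingual_avatar_monolingual)

# (1) Control vs bilinguals' overall RT (both IV2 levels averaged per subject).
bilingual_overall = (bilingual_avatar_bilingual + bilingual_avatar_monolingual) / 2
t_overall, p_overall = stats.ttest_ind(control, bilingual_overall)

# (2) Control vs each IV2 level separately (correct for multiple comparisons,
# e.g. Bonferroni, if you run both).
t_lvl1, p_lvl1 = stats.ttest_ind(control, bilingual_avatar_bilingual)
t_lvl2, p_lvl2 = stats.ttest_ind(control, bilingual_avatar_monolingual)

print(p_within, p_overall, p_lvl1, p_lvl2)
```

For an omnibus test, a mixed-design ANOVA (for instance via a stats package that supports within- and between-subject factors) would only include the bilingual group's within-subject factor; the control group then enters through the between-subjects contrasts as above.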
  • asked a question related to Visual Attention
Question
5 answers
You can suggest some literature too
Relevant answer
Answer
Are you familiar with the Cognitive Theory of Multimedia Learning (CTML)? I know you asked about what factors affect learners' visual attention in the classroom, but I believe that some of the principles that apply to multimedia are also valid for the traditional school environment. CTML draws on Baddeley's model of working memory, Paivio's dual coding theory, and Sweller's theory of cognitive load. The main proponent of CTML, Mayer (2009), identified twelve principles for multimedia instruction, some of which are directly related to visual attention. I will leave a link to an interview with Professor Mayer here for you to check if you are interested.
Cheers,
  • asked a question related to Visual Attention
Question
4 answers
I am looking for papers that provide explanations both analytically and mathematically.
Relevant answer
Answer
I hope the following papers are of interest to you:
1. Sutton, Michael A. (2013). Computer Vision-Based, Noncontacting Deformation Measurements in Mechanics: A Generational Transformation. Applied Mechanics Reviews.
2. Cha, Young-Jin; You, Kisung; Choi, Wooram (2016). Vision-based detection of loosened bolts using the Hough transform and support vector machines. Automation in Construction.
  • asked a question related to Visual Attention
Question
480 answers
Dear RG community, I open this pedagogical eThread to have a friendly and sincere discussion on the need to teach visualization in the pure sciences, starting at the high-school level.
The idea comes to mind after finishing a short MOOC course on the need for visualization when teaching mathematics at a basic level, offered at the Open University in the UK.
I asked myself this morning the following elementary questions:
  • "Do I really know how to visualize a complex number z = x + i y?"
  • "Did I try it myself at least once in my life as a science teacher to visualize it on my own?"
I hope, you find it enriching. Thanks in advance to all participants.
Using visualization in maths teaching CC licensed at:
Tools & channels:
"Dutchsinse" recommended by a friend, Dr. Stephen L.
Relevant answer
Answer
Yes, Prof.
Vadim S. Gorshkov
Visualization is a very extensive subject, not easy to cover or to set limits on. I am learning all this and I find it interesting; for example, in Moodle, researchers have developed a particular plugin for visualization in asynchronous teaching and learning, and that alone is a Ph.D. thesis that well deserves attention.
Best Regards.
  • asked a question related to Visual Attention
Question
5 answers
Looking for Cooperation: 3D Brain Activity & Visual Attention
I am looking for research cooperation on understanding vision and brain activity with a new ultrafast, high-resolution ultrasound system (5 microns, 20,000 images/s) developed in France.
I am a professor at a Brazilian federal university, specialised in neuroscience.
The project would need to purchase this ultrasound system.
Relevant answer
Answer
Good evening Emmanuel.
Thank you for your reply.
When, in your opinion, would it be possible to take measurements in humans for research, eye and brain, with microbubbles?
Olivier Baud <olivier.baud@unige.ch> has already taken measurements in premature infants when he was at the Robert Debré hospital, and he would also be interested in the eyes.
Kind regards.
PS: On your side, what are the objectives of your work?
  • asked a question related to Visual Attention
Question
2 answers
Hi everyone,
In eye-movement tracking studies with babies, it is sometimes difficult to get a perfect calibration. I wonder if there are well-established criteria, thresholds or recommendations for excluding calibrations.
Any input - tutorials, method reviews or drawn from researchers' own experience would be very helpful.
Second, has anyone experienced slightly shifted calibrations (i.e. the experimenter perceives that the baby is looking at the right target, but the eye tracker maps the eye movement with a shift, e.g. to the right, probably due to an issue with the initial calibration)? Are there ways to correct those, or should the participants' data be discarded?
Many thanks in advance for experienced input.
Relevant answer
Answer
Aude hi,
To my understanding, in many baby studies you need to track gaze positions on real objects, e.g. toys on a table. In contrast, many eye-trackers calibrate and validate gaze positions on an orthogonal plane, such as a monitor surface. This can lead to intrinsic errors in a baby's gaze tracking. In our stimulus presentation software, EventIDE (www.okazolab.com), we address this by offering a 3D perspective correction for the default eye-tracker calibration. Apart from that, the software includes a quick, semi-automatic correction for slight shifts (e.g. as a result of head movement). If you would like to see a demo, please write to me here or at i.korjoukov@okazolab.com.
  • asked a question related to Visual Attention
Question
6 answers
I am planning to conduct a study on commuting scenarios. For the study, I need to control the tiredness (fatigue), visual attention and crowdedness in order to simulate the actual commuting scenario (e.g. by bus). I have read some articles on Stroop effect, n-back test and Go/NoGo test to control the fatigue. But I don't know how to control the visual attention and crowdedness, I couldn't find more articles on this. Any suggestions?
Relevant answer
Answer
Given that you want to assess a real-life scenario, it might make sense to actually simulate one. You could look at virtual reality scenarios to place your participants on either a crowded or a half-empty bus, train or subway. In terms of visual attention, you could use two scenarios in which the other passengers are either quiet and non-engaging vs. a scenario where they are a bit more active and visually engaging for the participant to draw their visual attention.
That would leave a few questions:
- is such a scenario already available? I'm not aware of any, so it might have to be created.
- how to implement traditional neuropsych tests in such a scenario? Maybe look at publications by Thomas Parsons, who has done so with various real-world scenarios.
- do you have to use traditional neuropsych tests? Not really, as you could also implement more naturalistic (ecologically relevant) assessments of attention (or whatever you intend to measure). However, you might want to correlate whatever you measure in the scenario with traditional (validated) neuropsych measures since that's what this field still fixates on.
Let me know if you have any questions. Best of luck.
  • asked a question related to Visual Attention
Question
6 answers
Hi,
I am aware of the issues and pitfalls of online behavioral testing - i.e. collecting data in a cognitive experimental (visual search) task over the internet. Yet, I decided to do a little test:
I want to assess the same visual search task in a controlled lab setting and using an online platform. I am not planning to do a repeated measures setting.
In the lab setting, around 30 participants will do (based on my previous experiments).
Here's the big question: If I want to validate the online method how many participants do I need online? Shall I go with the same sample size or should I aim for as many respondents as possible?
Thanks in advance!
Andras
Relevant answer
Answer
I agree. But checking for variations should still be done.
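On the "how many participants" part of the question, a rough normal-approximation power calculation can give a starting point; the effect sizes below are assumptions, not estimates from the actual task. Since online data tend to be noisier, aiming above the lab sample size is usually sensible.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group N for a two-sided, two-sample t-test
    (normal approximation to the power function)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Medium effect (Cohen's d = 0.5): about 63 per group.
print(n_per_group(0.5))
# Detecting a smaller lab-vs-online discrepancy (d = 0.3): about 175 per group.
print(n_per_group(0.3))
```

In other words, if you want to *validate* the online method (i.e. show that any lab-vs-online difference is small), you need the sample size for the small difference you want to rule out, which is typically much larger than the lab N of 30.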
  • asked a question related to Visual Attention
Question
6 answers
I want to track participants' visual attention while interacting with a robot. Thus, I need a head mounted eye tracker or glasses. Could someone please tell me which system is best to use? Thank you.
Relevant answer
Answer
  • asked a question related to Visual Attention
Question
3 answers
What I am interested in is a sustained task setting where we can use top-down (goal-driven) and bottom-up (stimulus-driven) attention tasks independently, to compare between them.
Relevant answer
Answer
Dear Sourav, Your question and intent creates a range of critical problems, which may make it extraordinarily difficult to separate bottom-up and top-down attention issues. As a starting point for further discussion of the issues, can I recommend that you download Volume 2 of my thesis from the following link: http://researchrepository.murdoch.edu.au/id/eprint/30198/
Read Sections A1.2.2 and A1.2.3. This should help to lay a good foundation. Once you have done that, please let me know and I will point you toward some important information related to aspects of neurological processing, including recent research on the implications of astrocyte processes for feelings and attention. I hope this will be of help.
Yours sincerely,
Bruce Hilliard
  • asked a question related to Visual Attention
Question
9 answers
Is there an actual proof that visual-spatial cues enhance early or late visual processing (as compared to uncued visual processing)?
Without saying that what is implied by this question is "true", we know that when it comes to response times (RT), peripheral (or exogenous) and central (or endogenous) cues will have a different impact (e.g., Doallo et al., 2004). However, I struggle to find any event-related potentials (ERP) study that demonstrates an enhancement of perceptual processes following a cue (preferably peripheral) when compared to "self-generated", or spontaneous, gazes (i.e., overt spatial attention).
For instance, say that you have to look out for forest fires all day long. You will probably end up doing something else to fight boredom, and hence end up looking for possible smoke from time to time.
Now the question is: will you be able to report (RT) smoke faster if you are spatially cued, because the cue allowed you to perceive (ERP) it faster?
To summarize:
Endogenous Cue – Spontaneous = ?
Exogenous Cue – Endogenous Cue = ?
Exogenous Cue – Spontaneous = ?
Reference
Doallo, S., Lorenzo-Lopez, L., Vizoso, C., Holguín, S. R., Amenedo, E., Bara, S., & Cadaveira, F. (2004). The time course of the effects of central and peripheral cues on visual processing: an event-related potentials study. Clinical Neurophysiology, 115(1), 199-210.
Relevant answer
Answer
This is a great question because spatial cueing research is mainly about variations on a paradigm and it's important to stop and think again about what it all means. So, there is a big literature on ERP effects of spatial cueing, beginning (according to a quick search) with Eimer (1993). Many of these studies would however involve "self generated gazes" - or self-directed attention. For example, Nobre et al. (2000) performed an experiment in which the same (bicoloured, central) stimulus had two possible interpretations (i.e. target is probably right or probably left), according to prior instructions, and got early negative ERP enhancement contralateral to the cued hemifield.
However if we limit the question to exogenous cueing, a recent review by Slotnick (2017) concludes that early ERP effects in visual cortex (C1 component) are more likely to be observed for exogenous than endogenous cues, in upper visual fields, with distractors and with high attention load.
Presumably gamma enhancement and reaction time effects occur later than C1.
Eimer, M. (1993). Spatial cueing, sensory gating and selective response preparation: an ERP study on visuo-spatial orienting. Electroencephalography and Clinical Neurophysiology/Evoked Potentials, 88(5), 408-420.
Nobre, A. C., Sebestyen, G. N., & Miniussi, C. (2000). The dynamics of shifting visuospatial attention revealed by event-related potentials. Neuropsychologia, 38(7), 964-974.
Slotnick, S. D. (2017). The experimental parameters that affect attentional modulation of the ERP C1 component. Cognitive Neuroscience, 9(1-2), 53-62. DOI: 10.1080/17588928.2017.1369021
  • asked a question related to Visual Attention
Question
2 answers
I'm looking for a simple MOT to assess selective visual attention in a group of video gamers. The test should be validated for this measure and free to use.
I'm essentially looking for something exactly like the link below, but that outputs raw data.
  • asked a question related to Visual Attention
Question
19 answers
One can measure almost everything as far as vital functions are concerned. E.g., by plethysmography we can measure vascular tonus and ANS-CNS coupling. EEG-ECG coupling, as measured by coherence, can reveal how much appreciation we have for one another. By EEG and MEG we can measure many brain functions, for example visual attention.
Moving to sensors in a t-shirt, we can estimate stroke volume and respiration rate. Is it enough to say that, using oculography, we can tell what is being attended to?
Can we say that someone is capable of flying an F-15 safely?
Jerzy
PS. Flight safety is my profession.
If we save one life, it is as if we had saved the whole world. Flight safety?
Relevant answer
Answer
The answer to the question you asked is very simple: yes, there will be high correlations between various variables and being a safe pilot. But that is not what you want to know. You asked the wrong question to get the answer you want; the question is:
Is it possible, using the sorts of data you mention, to determine with certainty whether a particular person will be a safe pilot?
The answer is very simple:
NO.
There is too much variability. There are too many interactions. The situation when a pilot is tested in flight is never the same as when the data you are describing are being obtained.
No, it is not possible to determine safety with certainty. Anybody who thinks the contrary is not thinking clearly.
George Spaeth
  • asked a question related to Visual Attention
Question
3 answers
My collaborators and I are putting together a symposium for Psychonomics 2017 and have some findings that go against the grain (we don't find an attentional bias to select faces across several studies).  We are looking for other presenters for this symposium, so if you have data that is relevant that you'd like to present with us please email me directly - ebirming@sfu.ca
Relevant answer
Answer
We found attentional capture by inverted faces. I can share the outcomes of our new set of experiments. Best Nicolas Burra
  • asked a question related to Visual Attention
Question
5 answers
Thank you!
Tomaso
Relevant answer
Answer
Unfortunately, even if it can be used in GIS software to visualize the regions, it doesn't allow download of the data (you'd need WFS - Web Feature Services).
  • asked a question related to Visual Attention
Question
2 answers
Hi! We want to investigate eye movements in patients with neurodegenerative diseases in a Russian cohort. To make it comparable, we kindly ask you to share some of your basic study protocols (in Presentation or other software)!
Relevant answer
Answer
Dear Manuel, thank you for an idea!
  • asked a question related to Visual Attention
Question
7 answers
The general data we found range between 10 and 20 minutes in young and healthy individuals, but more accurate data specific to these tasks (visual attention) would be great.
Relevant answer
Answer
A related control issue is sleep quantity, quality and circadian timing preceding the experiment.  We instructed subjects to sleep eight hours per night for the three nights preceding an experimental session, retiring and arising at the same times of night/day.  Also, we always tested at the same time(s) of day/night.
  • asked a question related to Visual Attention
Question
1 answer
Is there any study which investigates whether the use of mobile phone while driving impairs involuntary attentional responses? Involuntary attention may be fundamental while driving, as it may allow drivers to quickly direct attentional resources to a sudden change within the environment. 
Relevant answer
Answer
Hi there,
This is not directly my field and I am not sure if this is the kind of thing you were looking for, but this study might be of interest (http://journals.sagepub.com/doi/abs/10.1177/0018720809337814?journalCode=hfsa).
In terms of relevance to your question, this study operationalised "involuntary" attention as the number of fixations upon irrelevant information (billboards along the side of the road). They report that performing a secondary auditory task (similar to using an in-car phone) did not affect the amount of involuntary attention directed towards the billboards.
Critically though, the secondary task in this study would probably be more similar to using an in-car phone system than a handheld mobile device, which would probably be more likely to divide visual attention.
  • asked a question related to Visual Attention
Question
4 answers
Are they combined together for object Tracking?
Relevant answer
Answer
Thanks "Tunc Guven Kaya".
I believe Lucas-Kanade is an algorithm to detect optical flow, not texture.
Am I missing something?
  • asked a question related to Visual Attention
Question
2 answers
I have an allegedly operational ASL Eye-trac 6000 eye tracker but not the control software for it. Does anyone have a copy?
Relevant answer
Answer
Thanks
Best
Mickey Goldberg
  • asked a question related to Visual Attention
Question
1 answer
The dual-task will comprise a simultaneous auditory digit span [DS] test and a visual response time [RT] test. Participants will hear a series of five-, seven- and nine-digit sequences presented in a random order, which will remain consistent for each participant. Participants will be asked to verbally repeat back the digits in the correct order 2 s after the onset of the final digit, with a time limit of 1 s per digit to recall the sequence, and will be awarded 2 points for every correct digit recalled in the correct place and 1 point for each correct digit in the incorrect location. 1 point is deducted for each recalled digit that was not in the original sequence. The accuracy score is then converted into a percentage. Whilst engaged in the auditory task, participants will simultaneously partake in a visual RT test, which involves an image of a small football being randomly presented for 200 ms on a white background in one of four quadrants of a computer screen. Participants will be required to press an allocated button with their dominant hand in response to the stimuli as quickly as possible. Images will be presented at pseudorandom interludes between the words “ready” and “go” on the DS test, and will appear 750-1,000 ms prior to the onset of the next auditory stimulus. During each test, 96 footballs will be presented and the reaction time score will be recorded as the average time (ms) taken to respond to the stimuli across these trials.
Any help would be much appreciated! So far I have four slides with the football image in each quadrant... so I've got a long way to go!
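For reference, here is how I understand the digit-span scoring rule above, as code (a rough sketch; how repeated digits and negative totals are handled is my own assumption, since the description leaves those cases open):

```python
def digit_span_score(presented, recalled):
    """Score one digit-span trial per the rule above:
    +2 per digit recalled in the correct serial position,
    +1 per presented digit recalled in the wrong position,
    -1 per recalled digit not in the original sequence.
    Returns the score as a percentage of the maximum (2 * sequence length);
    flooring negative totals at zero is an assumption."""
    score = 0
    remaining = list(presented)          # presented digits not yet credited
    for pos, digit in enumerate(recalled):
        if pos < len(presented) and digit == presented[pos]:
            score += 2                   # correct digit, correct place
            if digit in remaining:
                remaining.remove(digit)
        elif digit in remaining:
            score += 1                   # correct digit, wrong place
            remaining.remove(digit)
        else:
            score -= 1                   # intrusion
    return max(score, 0) / (2 * len(presented)) * 100

print(digit_span_score([3, 1, 4, 1, 5], [3, 1, 4, 1, 5]))  # perfect -> 100.0
print(digit_span_score([3, 1, 4, 1, 5], [3, 1, 4, 5, 1]))  # one swap -> 80.0
```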
Relevant answer
Answer
Kate hi,
If you are not tied to E-Prime, then you can easily implement such a dual task in our software, EventIDE (www.okazolab.com). In fact, we already have a template for a similar task that can be converted to yours by adding the football images. As a bonus, you can use voice onset detection and speech recognition, which would allow you to measure and evaluate the voice responses in the first sub-task automatically. Optionally, you can monitor a participant's performance (e.g. a plot of the accuracy score) in real time on a second monitor.
If you like, I can send you a working dual-task template for a trial and help with adding the football images.
  • asked a question related to Visual Attention
Question
4 answers
Distance from the point of fixation can affect the attentional response in the human attentional control system. Is there any mathematical function to characterize this?
Relevant answer
Answer
For overt shifts of attention (i.e. saccades) and distractor effects there is a spatial structure. These have been described geometrically (i.e. mapped). See Walker et al. (1997), J Neurophysiol 78:1108-1119, and works citing it. Not sure about covert shifts of attention.
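Purely as an illustration of what such a function could look like (not an empirically established model; every parameter below is invented), a common toy assumption is a Gaussian attentional gradient whose cueing benefit decays with distance from the attended location:

```python
import math

def gaussian_gradient(ecc_deg, sigma_deg=3.0):
    """Toy attentional gradient: facilitation falls off as a Gaussian of the
    distance (degrees of visual angle) from the attended/cued location.
    sigma_deg is a free parameter, not an empirically established value."""
    return math.exp(-(ecc_deg ** 2) / (2 * sigma_deg ** 2))

def predicted_rt(ecc_deg, base_rt_ms=350.0, max_benefit_ms=40.0, sigma_deg=3.0):
    """Predicted RT: a baseline minus a cueing benefit that shrinks with
    distance from the cued location (all numbers invented)."""
    return base_rt_ms - max_benefit_ms * gaussian_gradient(ecc_deg, sigma_deg)

for ecc in (0, 2, 5, 10):
    print(ecc, round(predicted_rt(ecc), 1))
```

Whether a Gaussian, exponential, or "Mexican hat" (centre-surround) profile fits best is itself an empirical question; the geometric mappings in Walker et al. (1997) are a good place to start for saccades.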
  • asked a question related to Visual Attention
Question
5 answers
Looking for some advice on the use of eye-tracking glasses on toddlers (18 months+). We are interested in the TOBII glasses but are also open to recommendations if experienced users have a more kid friendly product in mind. We're interested in testing in rich environments (science museums, labs, classrooms) and for use in human robot interaction and imitation tasks (amongst other potential uses). 
Any advice on whether commercially available eye trackers can deal with little heads would be very appreciated! 
Thanks!
Relevant answer
Answer
Hey Krisryn, 
We use the SMI Eye-Tracking glasses but they might be something too big and heavy for toddlers. I think the glasses are suitable for children aged 3 and over. The advantage is that the glasses are more or less calibration-free.
  • asked a question related to Visual Attention
Question
5 answers
I know some studies which show that stimuli considered phylogenetically fear-relevant, such as snakes or spiders, benefit from greater attentional capture than other stimuli, even for babies. I would like to know if any studies have investigated this effect with stimuli considered ontogenetically fear-relevant, such as guns, in a young population without experience of those stimuli (babies?).
Relevant answer
Answer
Hum… very good question. If it is about guns... A gun has a single usage… I mean, is there any other purpose to the object other than to kill or remind you that you can be killed? Guns induce the fear of death. I recommend Ernest Becker’s work on this topic… Guns incite and trigger violence, even if it’s only by their presence.
Best wishes,
Pedro
  • asked a question related to Visual Attention
Question
3 answers
I am interested in whether I should expect different interference effects (both behaviorally and in the EEG) when using 2 flankers surrounding the target compared to 4 flankers. I have found papers on different types of stimuli (arrows versus letters and so on), but I had no luck regarding the number of flankers presented.
Does anyone know of such a study, or has experience in presenting different number of flankers?
Relevant answer
Answer
Hi Stephanie,
I've played a bit with the number and aspect of flankers in some pilot studies. In one version I used two flankers which were bigger than the central target, while in a different version I used four flankers which were the same size as the target. Flanker effects (incongruent minus congruent RT and errors) were larger for the four identical flankers versus the two big flankers. However, based on that pilot I can't tell if the difference was due to the number of flankers or to their being of the same/different size relative to the target. Since you're doing EEG, you'll probably get stronger visual responses with 4 flankers, which you may or may not want based on the design of your study.
Hope this helps,
Francesco
  • asked a question related to Visual Attention
Question
6 answers
In many visual experiments, subjects are asked to memorize specific objects in a scene and are then reported to be tested with a post memory test. No further details are given about what kind of memory test is conducted, or how it is carried out to measure participants' memorization.
Any examples of post memory tests? Any advice?
Relevant answer
Answer
I just ran a study similar to this. I asked participants to remember either the location or the color of four squares before performing a visual search task. After the visual search task, I asked whether participants remembered the location or color by displaying a square in either the same location or color with a memory probe display on half of the trials.
  • asked a question related to Visual Attention
Question
4 answers
Given an input image, I would like to know if some computational method can be used to extract the pre-attentive feature or a sort of early visual attention map. 
Relevant answer
Answer
As far as I know, most computational models are about goal-driven or (covert) attention and eye movements. But you can have a look at this: Gao & Vasconcelos 2004, Adv. in Neural Info. Process. Systems, 17, 481-488. As for bottom-up models, have a look also at Harel et al. 2007, Adv. in Neural Info. Process. Systems, 19, 545; Peters et al. 2005, Vis. Res., 45(8), 2397-2416. Although not on computation, this is interesting: Martinez-Conde & Macknik 2015, Perception, 884-889.
  • asked a question related to Visual Attention
Question
1 answer
In the Emotion field, it is usually assumed that negative information is selected more efficiently than positive information. It is reflected in dot-probe tasks with a higher validity effect for negative information than for positive information.
I was just wondering: when you have a validity effect that is higher in one condition (not necessarily in the Emotion field), does it reflect a selection that is more frequent, or a selection that is stronger?
If I rephrase, concerning the higher validity effect for negative information: does it mean that negative information is selected on more trials than positive information? Or that negative and positive information are selected with the same frequency, but the validity effect is stronger for negative information?
Relevant answer
Answer
So, negative and positive information: which is more important for selecting an adequate response? Several factors must be considered to answer the question. First, the modality, intensity and duration of action of a given factor. Second, the functional state and individual characteristics of the object (a person, for example) that is exposed to the factor. Finally, we must take into account how the factor's characteristics change over a particular time and space.
Please evaluate the papers in the attachment.
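Returning to the frequency-versus-strength distinction in the question: a toy simulation shows why the mean validity effect alone cannot separate the two readings (all parameters below are invented). Frequent-but-weak selection and rare-but-strong selection can produce the same average effect.

```python
import numpy as np

rng = np.random.default_rng(1)
N_TRIALS = 100_000

def simulate_validity_effect(p_select, benefit_ms, base_rt=500.0, noise_sd=50.0):
    """On each valid trial the cue captures selection with probability
    p_select; when it does, RT drops by benefit_ms. Invalid trials get no
    benefit. Returns the mean valid-vs-invalid RT difference in ms."""
    selected = rng.random(N_TRIALS) < p_select
    valid = base_rt - benefit_ms * selected + rng.normal(0, noise_sd, N_TRIALS)
    invalid = base_rt + rng.normal(0, noise_sd, N_TRIALS)
    return invalid.mean() - valid.mean()

# Frequent but weak: selected on 80% of trials, 25 ms benefit each time.
print(simulate_validity_effect(0.8, 25.0))  # ~20 ms
# Rare but strong: selected on 40% of trials, 50 ms benefit each time.
print(simulate_validity_effect(0.4, 50.0))  # ~20 ms
```

Distinguishing the two empirically therefore requires more than mean differences, e.g. analyses of RT distributions (delta plots) or mixture modeling of valid-trial RTs.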
  • asked a question related to Visual Attention
Question
5 answers
Does anybody work on, or know of, studies that have investigated the interaction between the Simon effect and the validity effect of a spatial cueing task?
Relevant answer
Answer
What steps have you taken so far to locate such studies?  Have you tried a Google Scholar search on <Simon effect and cuing>, for example?  I just did such a search and saw several articles that looked promising. 
  • asked a question related to Visual Attention
Question
8 answers
I am working with an eye-tracking system. I have (raw) data of XY-coordinate points on the screen that my eye looked at. Say I use a PDF file on the screen. I want to find which word was at a particular coordinate point. I am using Ghostscript for this purpose but I am not sure how accurate it is.
Can anyone please give me suggestions on how to do this, or suggest interpreters that can find a particular word at a particular position on the screen?
Relevant answer
Answer
I don't know of any software for this purpose, but you could measure your own x,y gaze points while you look at each word, doing so slowly and methodically in some sequential order (say left to right, top to bottom of the text page). Use keystrokes or mouse clicks (which presumably would show up in your raw data file) to mark when you move your eyes from one word to the next. Then exclude all saccades from the raw data, and average the fixation gaze point x and y values between sequential keystrokes to obtain the average x,y for a given word. Next, calculate the distance of the points associated with each word from the centre point for that word, and use some variability statistic of these distances to get a measure of the spatial area within which gazes to that word will occur. For example, (mean + 2 * standard deviation) might be useful. Since such gaze point areas for words are probably going to be elliptical, with bigger x variation than y, you might want to calculate the standard deviations separately for the x and y dimensions (as opposed to 2D distances). So, for example, say word A had mean XA and mean YA values, and standard deviations XSA and YSA. For each of your test subjects, you would find all fixation points in their raw data file that have X and Y values within the distances (2*XSA) and (2*YSA) from XA and YA, respectively -- these points would likely be fixations on word A.
Now, for this technique to have acceptable accuracy, you probably need to use a chin/head rest for your word-position calibration and subsequent subject testing, and position the words on the screen as far from each other as is feasible given your experimental design and purpose.
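The averaging-and-ellipse procedure described above can be sketched as follows (a minimal sketch; the words, coordinates, and the 2-SD tolerance are illustrative, not calibrated values):

```python
import numpy as np

def word_regions(calib_fixations):
    """calib_fixations: dict mapping word -> list of (x, y) gaze samples
    recorded while deliberately fixating that word. Returns per-word centre
    (mean x, mean y) plus an elliptical tolerance of 2 SD per axis."""
    regions = {}
    for word, pts in calib_fixations.items():
        pts = np.asarray(pts, dtype=float)
        mx, my = pts.mean(axis=0)
        sx, sy = pts.std(axis=0)
        regions[word] = (mx, my, 2 * sx, 2 * sy)
    return regions

def classify(x, y, regions):
    """Return the word whose 2-SD ellipse contains (x, y), else None."""
    for word, (mx, my, tol_x, tol_y) in regions.items():
        if ((x - mx) / tol_x) ** 2 + ((y - my) / tol_y) ** 2 <= 1:
            return word
    return None

# Hypothetical calibration samples for two words on the screen.
calib = {"cat": [(100, 200), (104, 198), (98, 203), (102, 201)],
         "dog": [(400, 200), (396, 205), (403, 199), (401, 202)]}
regions = word_regions(calib)
print(classify(101, 200, regions))  # -> cat
print(classify(250, 200, regions))  # -> None (between the two words)
```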
  • asked a question related to Visual Attention
Question
9 answers
I am working with eye movement metrics to infer visual task based on observed eye movement patterns.
One of these metrics is total scanpath length. Based on what I have read, I need to find the length of movement (saccade) between consecutive fixations. I assumed that this can be easily calculated using Euclidean distance in pixels, then converting it into degree of visual angle. Is my assumption correct?
When I searched how a scanpath is computed, I encountered some programs where the scanpath is compressed and the similarity between scanpaths is also calculated. Do I need to compress a scanpath (by removing repetitive sequences) in order to correctly calculate its length? For what purpose is the similarity between two scanpaths used?
Are there any additional criteria that I should also consider?
Relevant answer
Answer
Hey Jihad!
There is a myriad of different ways of comparing scanpaths, which result in different measures and require different preprocessing stages. A good overview of the measures you can use and their specific dis-/advantages is given in Holmqvist's book (chapter III, sections 10/11).
If you have a fixed amount of viewing time (and negligible data loss) for each participant, Euclidean distance will serve you well. But this also depends on your research question (e.g. are you actually interested in the travel distance, or is it more important to measure the area covered by the scanpath?).
For comparing scanpaths between different tasks (or views) I would recommend a more elaborate metric (e.g. recurrence quantification, MultiMatch).
Hope that helps, Greetings, David
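For the total scanpath length itself, the Euclidean-distance computation described in the question might look like this (a sketch; the monitor geometry is hypothetical, and the degree conversion is applied to the summed pixel distance as a small-angle approximation):

```python
import math

def pixels_to_degrees(dist_px, screen_w_px, screen_w_cm, view_dist_cm):
    """Convert an on-screen distance in pixels to degrees of visual angle,
    using the screen width to get the physical size of one pixel."""
    dist_cm = dist_px * screen_w_cm / screen_w_px
    return math.degrees(2 * math.atan(dist_cm / (2 * view_dist_cm)))

def scanpath_length_deg(fixations, screen_w_px, screen_w_cm, view_dist_cm):
    """Total scanpath length: the sum of Euclidean distances between
    consecutive fixations ((x, y) in pixels), converted to degrees."""
    total_px = sum(math.dist(a, b) for a, b in zip(fixations, fixations[1:]))
    return pixels_to_degrees(total_px, screen_w_px, screen_w_cm, view_dist_cm)

# Hypothetical setup: 1920-px-wide, 53-cm-wide monitor viewed at 60 cm.
fix = [(100, 100), (400, 500), (400, 100)]  # 500 px + 400 px = 900 px
print(round(scanpath_length_deg(fix, 1920, 53.0, 60.0), 2))  # ~23.4 degrees
```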
  • asked a question related to Visual Attention
Question
3 answers
The protocol is roughly as follows: two set sizes were used (12 and 16); in each set size, one item is the target and the remaining items are distractors. There were 16 trial sets in total: 8 with the target present and 8 without. I used eye tracking to assess visuospatial attention while participants performed the visual search. Which elements of the eye-tracking data will be useful to interpret this scenario?
Relevant answer
Answer
This sounds like it fits the 'selective attention' definition of visual attention (Perry and Hodges, 1999; Tsotsos et al., 1995). How are you operationally defining attention for this task? Looking at the two prior references and how they measure for similar constructs might be useful here.
  • asked a question related to Visual Attention
Question
3 answers
Hey,
I am looking for research on spatial memory, not on the scale of cities or buildings (spatial navigation), but rather small-scale spatial memory for objects in a room, a hall, or even just on a workbench.
Preferably done in a real-world setup rather than on a desktop. Studies in VR would also be possible (ideally HMD-based rather than just desktop VR; CAVEs could also work).
Relevant answer
Answer
Hi,
Please see the publication listed on my profile (first author Jonna Nilsson), which looks at allocentric and egocentric spatial reference frames using a virtual-reality programme under fMRI conditions. The specific aim of this research, however, was to compare a group of patients suffering from depression with healthy controls. Not sure if that is what you were after.
Thanks
Lucy Stevens
  • asked a question related to Visual Attention
Question
2 answers
I'm analysing euphemisms in commercials. Commercials are multimodal: I analyse the text/speech because it is relevant to me, but I also take the video into account, trying to make inferences about what I call "visual euphemisms". I need to explain why I wanted to look at the video too.
So, can I say that I'm analysing the video feature "due to its expressiveness and attentional relevance"?
Meaning that while a magazine article can include images, they are neither extremely relevant to the message nor do they capture my entire attention; in a commercial, by contrast, I put a lot of effort into watching the video, so it becomes the focus of my attention.
Relevant answer
Answer
Dear Martel,
There are a number of reasons for this, and they are best understood by first reading the material on attentional focus provided in the enclosed link. With that background, there are some key factors at play in this case:
(1) The video generates salience through motion, which can take a dominant path in shaping attention. This is a key bottom-up driver, and its implications are discussed in the following point.
(2) The magazine graphic does not have this motion salience, and must therefore rely on static salience cues to attract your attention (e.g. colour, layout, the use of human faces and other directionals, etc.). Additionally, it has to compete against the top-down drivers (see Task and Plans in the enclosed link), because the person is intent on reading the article and is therefore more likely to suppress the processing of the non-pertinent graphical elements. In a video, on the other hand, our top-down drivers tend to be focussed on watching the screen, so the salience of the video advertisement reinforces the top-down drivers.
Consequently, we tend to apply more overt and covert attention to the video, and this is particularly true if the video is well designed to support the other drivers discussed in the link.
I hope that this helps to answer your question.
Wishing you all the very best,
Bruce Hilliard
  • asked a question related to Visual Attention
Question
9 answers
I am looking for any methodology for studying the influence of roadside advertisements on driver behaviour. We are about to buy an eye tracker and have no experience in this kind of research. Can anyone suggest where to find research on this topic?
Relevant answer
Answer
Hi Joanna, such studies have been carried out by our group in Cracow under the direction of Professor W. Błasiak and published in monographs in Polish. Other eye-tracking work from our group is on RG, e.g.: https://www.researchgate.net/publication/282401858_Eye-tracking_verification_of_the_strategy_used_to_analyse_algorithms_expressed_in_a_flowchart_and_pseudocode . p.peczkowski@wp.pl.
Best wishes, Paul
  • asked a question related to Visual Attention
Question
7 answers
My supposition is that if a PowerPoint presentation contains little information, people will pay attention and be quieter than with a dense presentation containing a lot of information. I am thinking of cognitive overload, dual-task demands, and multisensory information that could decrease attention... Do you know of any papers on this subject?
  • asked a question related to Visual Attention
Question
5 answers
I'm running a fairly simple pupillometry experiment, with fixation preceding stimulus to capture the baseline. I've been able to capture differences between my two conditions during the stimulus (for long presentations) and after stimulus (for short presentations)...but I have differences between conditions in the fixation before the stimulus even comes on. What could possibly be causing this? Trials are completely randomized, and yet these differences are consistent across participants.
Relevant answer
Answer
Hey Steve!
Besides Luke's very good suggestions, I assume you controlled for eye movements (saccades and their preparation can affect pupil size).
In addition, anticipation affects pupil diameter (and participants are generally very good at building up some sort of anticipation from only the tiniest clues). If your fixation periods differed, or were by chance longer in one condition than the other, this could also lead to a difference in pupil diameter.
Greetings, David
  • asked a question related to Visual Attention
Question
2 answers
I would love to ask several questions about some attention-bias measuring tools: the VDP, modified food Stroop, one-back visual recognition task, food ANT (attention network test), and other techniques. Does anyone have experience with any of these?
Many thanks!
Vicki
Relevant answer
Answer
Dear Vicki Idzinski ma'am, it reduces the amount of memory required to store the pre-computed tables while maintaining the same success rate and online time. Both the DP method and the VDP method can be applied to the Hellman tradeoff or the rainbow tradeoff.
  • asked a question related to Visual Attention
Question
14 answers
Does anyone have the color-shape task (E-Prime) and could you assist with the matter?
Much appreciated, and thanks in advance!
Relevant answer
Answer
Hi David, thank you very much for your kindness and willingness to help. Eventually we programmed an E-Prime color-shape switching task ourselves. Thanks again! If I can help with anything, please don't hesitate to ask.
  • asked a question related to Visual Attention
Question
6 answers
I am working on the detection of creatures in the benthic environment. A data set of 5000 images is available. The images are fairly high-resolution with low noise. I have read some papers reporting detection rates of at most 85%, whereas our target is 95% (90% in the worst case). Please suggest some state-of-the-art techniques that could help me in my research (except deep learning, please).
Relevant answer
Answer
I'd like to sincerely recommend you take a look at our newly released image analysis software MIPAR at http://MIPAR.us. It has proven very capable of solving a variety of challenging segmentation problems, and I'd love for it to help you! It's difficult to suggest particular methods without seeing an example image, but if you are able to post or send one, I'd be happy to see if I could devise a recipe in MIPAR to find your features.
Cheers,
John
  • asked a question related to Visual Attention
Question
6 answers
In many visual attention and visual working memory tasks, different groups use distinct methods to set the time at which a cue is presented: before the stimulus appears (pre-cue) or after the stimulus display (post-cue).
What is the significance of manipulating this? How could the timing of the cue influence an experiment?
Relevant answer
Answer
Dear Arturo
Thank you for your help. O(∩_∩)O
  • asked a question related to Visual Attention
Question
4 answers
Has anyone had problems with synchrony between an eye-tracking system and the stimulus delivery software sending log messages to the eye-tracking system's logs? And if so, what could the possible sources of the problem be, how can it be avoided, and how should one deal with it?
We have an MR-compatible eye tracker from MR Technologies hooked onto Arrington Research's ViewPoint EyeTracker software on one PC. On a different but connected PC, Neurobehavioral Systems Presentation software controls stimulus delivery. I have Presentation communicate to the eye-tracking software's logs when my video stimulus starts and ends, because I was told it was more reliable to start the eye-tracking system manually rather than trying to control it through commands triggered from Presentation. As far as I understand, the eye-tracking system logs a line of data every 33 ms even when there is tracking loss. I expect to have the same number of eye-tracking data lines between my video log markers -- so if my videos ran at the tracker's ~30 fps, I assume I should have the same number of eye data points as frames for a given video -- is that correct?
However, the eye-tracking data corresponding to a video is on the order of up to 3 seconds (1-80 data points) longer than the video. For example, for a random video and according to the Presentation log files for some random 2 subjects:
video X: 102 frames (25fps) = 4.1 sec length of video (according to Presentation log files and video)
sub1: 182 lines eye data @ 30fps = 6 sec of data marked as recorded during the  length of video
sub2: 166 lines eye data @ 30fps = 5.5 sec
I am very hesitant to assume that the "video start" log marker that Presentation sends to the eye-tracker log files really corresponds to when the video started (and then to take only as much eye data as the video length, ignoring the "video end" log marker) -- or could this be a safe assumption?
Thanks in advance for any help and explanations!
Relevant answer
Answer
Dear Gina,
we used the Arrington ViewPoint software in our lab with the 220USB system.
Some thoughts why you might get different frame numbers:
1) It is not a constant-rate camera, so it actually runs at around 220 Hz but not at exactly 220 Hz. The same may be true of your camera (although there might be less variability in the 30 Hz version).
2) I found the marker insertion in the ViewPoint software rather unreliable (we used the remote Ethernet connection, and the markers were sometimes dropped or clipped at the start/end, even with a safety margin of starting the eye tracker some hundred milliseconds before the experiment).
In conclusion, if your video start/end markers do not coincide with the data acquisition start/end markers in the ViewPoint protocol, you are safe to discard the rest of the eye data. If they do coincide, then the data might well be corrupted.
Anyway, keep in mind that the variable rate of the eye tracker will not guarantee a one-to-one correspondence between video frames and eye-tracker samples (the important property is the "delta time" field in the ViewPoint protocol, which gives you the time interval between successive samples; if a rate change has occurred, there will be variability in these numbers).
Hope that helps, Greetings, David
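To illustrate the point about the "delta time" field, here is a small Python sketch (function and argument names are mine; the nominal interval and tolerance are illustrative) of how the actually recorded duration between two markers could be computed from per-sample intervals, instead of assuming a fixed sampling rate:

```python
def recorded_duration_s(delta_times_ms, start_idx, end_idx):
    """Actual recorded duration between two log markers, obtained by
    summing the per-sample 'delta time' values (in ms) rather than
    multiplying a sample count by a nominal interval."""
    return sum(delta_times_ms[start_idx:end_idx]) / 1000.0

def rate_is_stable(delta_times_ms, nominal_ms=33.3, tol_ms=2.0):
    """True if every inter-sample interval stays near the nominal rate;
    one large interval indicates dropped samples or a rate change."""
    return all(abs(dt - nominal_ms) <= tol_ms for dt in delta_times_ms)
```

Comparing `recorded_duration_s` against the video duration from the Presentation logs would show whether the surplus samples come from marker timing or from genuine extra recording time.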
  • asked a question related to Visual Attention
Question
6 answers
I am trying to understand whether there is a correlation between drivers' visual scanning of the road and their behaviour after they encounter a potential collision scenario. I have been working with Percentage Road Centre (PRC; Victor, 2005), but this gives me 5 individual measures. I was wondering whether there is an established model, ratio, or coefficient for assessing the spread of eye fixations within a given time frame.
Thanks in advance!
Victor, T. W. (2005). Keeping eye and mind on the road. Doctoral thesis, Uppsala University, Uppsala.
Relevant answer
Answer
Hi Tyron,
Our lab has found that the standard deviation of gaze positions provides greater sensitivity than PRC, and is generally simpler to calculate. See the attached pub.
-Jon
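For anyone wanting to try this before reading the attached publication, here is one simple way to formulate such a spread measure in Python (my own formulation, not necessarily the exact definition used in the paper): the RMS distance of gaze samples from their mean position.

```python
import math

def gaze_dispersion(xs, ys):
    """Scalar spread of gaze positions: the square root of the summed
    variances of x and y, i.e. the RMS distance of the gaze samples
    from their mean position."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 + (y - my) ** 2 for x, y in zip(xs, ys)) / n
    return math.sqrt(var)
```

Unlike PRC, this yields a single number per time window, which makes it straightforward to correlate with behavioural measures.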
  • asked a question related to Visual Attention
Question
11 answers
Hi
I have some experience with the EyeTribe (ET) and I wonder whether anybody has had similar problems to mine, so my questions are:
How many times do you repeat the calibration until you have a reliable one?
How long are your sessions if you use it in user studies? Or:
How long can you collect data with the ET? Does it automatically shut down after a while?
What is the viewing distance?
Do you use any extra tool to stabilize the user's head or to preserve the calibration and the viewing distance?
Do you use the ET more for post analysis or for real-time interaction?
Thanks
Relevant answer
Answer
Hi Kenan,
I used The Eye Tribe tracker for three experiments with a total of about 120 participants. The combination with OGAMA 5.0 software for post analysis, which I used, worked well.
The mean session time using this combination was about 20 minutes.
I used a 12-point calibration with no extra tool to stabilize the user's head, and had to exclude about 8% of my participants because of low data quality (e.g. mascara problems).
The viewing distance was 60 cm using a 1280x1024 display.
Repeating the calibration:
In about 70% of cases I had to calibrate only once,
in about 25% I needed a second calibration,
and in about 5% I needed more than two calibrations.
I had some shutdown problems with my EyeTribe-OGAMA combination when the mouse cursor left the participant's screen.
Shutdowns may depend on computing power. I have an i7 machine and no further problems, even with long session durations.
Best regards,
Matthias
  • asked a question related to Visual Attention
Question
5 answers
Which questionnaire is used to measure the practice level of Focused Attention Meditation?
Relevant answer
Answer
Try Travis and Shear from 2010 in Consciousness and Cognition
The article is called:
Focused attention, open monitoring and automatic self-transcending: Categories to organize meditations from Vedic, Buddhist and Chinese traditions
at:
  • asked a question related to Visual Attention
Question
7 answers
I ran a study with two types of targets (3 schematic faces or 3 schematic houses). The task in which participants had to detect the 3 schematic houses and ignore distractors (photos of houses or faces) seems to be more difficult (longer reaction times, more false alarms) than the task in which they had to detect the 3 schematic faces.
As the heterogeneity of the 3 schematic houses is more pronounced than that of the 3 schematic faces, I would like to know whether any studies have investigated the impact of target heterogeneity on attentional capture by distractors.
Relevant answer
Answer
We are practically hardwired to pay attention to faces, which makes sense from the perspective of evolution; houses are recent in our history. It would thus be easier to resist distractors when looking at pictures of faces than when looking at pictures of houses. That might account for your results, perhaps more than heterogeneity.
  • asked a question related to Visual Attention
Question
8 answers
We want to compare the encoding/retrieval of three different types of scenes in an fMRI study. To make sure that any differences are not due to the overall complexity of the stimuli, we would like to equate our pictures for complexity. What is the best way to do this?
Relevant answer
Answer
There are several methods. Please, evaluate one of this.
  • asked a question related to Visual Attention
Question
4 answers
We are working on a mobile system/app which monitors user interaction (such as screen unlocks and attentiveness to notifications), and I was wondering whether there are any suitable functions to measure attention, taking into account reaction times, switching costs, and so on. We do not intend to use any physiological or brain sensors; all measurements will be based on user interaction.
Relevant answer
Answer
Besides time switching and all that jazz, there are two other areas of interaction that could be exploited:
The first is linguistics! If the software can determine the user's normal linguistic profile, through any number of methods (or through a combination of geographic and demographic profiling), one could then monitor minute changes in attitude, impulsivity, and coherence throughout the user's writing and reading. These data sets could then be analysed to provide information even on the user's neurotransmitter levels in the extracellular spaces of the brain.
The second would be facial and motor monitoring. I wouldn't be the best person to provide info on this, but it would require gathering information on facial twitches as well as blink rates, eyebrow shaping, and lip movements...
Are these more the kind of things you're looking for?
Jon
  • asked a question related to Visual Attention
Question
12 answers
I have an eye-tracker dataset that includes pupil size information. The data was recorded primarily for examining eye movements and fixations, but I am interested in looking into whether the pupil size data says anything interesting about cognitive effort during a peripheral detection task. However, I am relatively new to pupillometry and having read some of the literature around pupil dilation and cognitive effort/attention, I can't identify a standard approach to cleaning and analysing a pupil size dataset (e.g. how to smooth the data, deal with blinks or missing data, how to identify outlying datapoints etc.). Is there such a standard approach or method?
Relevant answer
Answer
Hi Jim,
There's a pretty straightforward way of pre-processing, which entails the following steps:
1) Interpolate blinks from the signal. These are characterised by rapid declines towards 0 at blink onset, and rapid rises from 0 back to a regular value at blink offset. For a blink removal algorithm, see Sebastiaan Mathot's approach (link attached), which is a good way of doing it.
2) Reject artifacts, e.g. by Hampel filtering. The link to a Matlab function is attached.
3) Optionally smooth the data (depending on your parameters, the Hampel filter might actually act as a smoothing function too). A popular approach is to use a moving window, e.g. a Hanning window (see attached Wikipedia link for clarification).
4) Divide the signal (e.g. timepoint 0 ms to timepoint 3000 ms) by the median pupil size during a baseline period (e.g. timepoint -200 ms to timepoint 0 ms). This is an important step, as most trackers tend to work with arbitrary numbers, whereas most papers report changes as a proportional change.
N.B. Please note that some trackers report the pupil AREA, whereas others report the pupil DIAMETER. The proportional increase over time of the AREA will be a different function than the proportional increase over time of the DIAMETER, because the two are not linearly related!
EDIT: Additionally, make sure that no eye movements happened in the intervals from which you collected pupil data. And be sure that your stimuli are equiluminant. This is important, as systematic changes in luminance between conditions will result in a systematic difference in pupil size between conditions because of the pupillary light response. For a recent paper using pupillometry, see our recent publication in Journal of Vision (attached). The design of our experiment is a good example of how to capitalise on the pupillary light response to test spatial attention. It's also a good example of an equiluminant experimental paradigm.
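To make the interpolation, artifact-rejection, and baseline steps concrete, here is a minimal pure-Python sketch (my own implementation with illustrative default parameters, not the exact algorithms in the linked resources; it assumes blinks are recorded as zeros flanked by valid samples, and it omits the optional smoothing step):

```python
import statistics

def preprocess_pupil(trace, baseline_n=20, win=3, t=3.0):
    """Sketch of a pupil pre-processing pipeline:
    1) linearly interpolate blink samples (assumed recorded as 0),
    2) Hampel filter: replace samples more than t scaled-MADs away
       from their local median,
    3) divide by the median of the first baseline_n samples, so pupil
       size is expressed as a proportion of baseline."""
    x = [float(v) for v in trace]

    # 1) blinks: linear interpolation over runs of zero samples
    good = [i for i, v in enumerate(x) if v > 0]
    for i, v in enumerate(x):
        if v <= 0:
            left = max(j for j in good if j < i)
            right = min(j for j in good if j > i)
            frac = (i - left) / (right - left)
            x[i] = x[left] + frac * (x[right] - x[left])

    # 2) Hampel filter over a sliding window of 2*win+1 samples
    out = x[:]
    for i in range(len(x)):
        w = x[max(0, i - win): i + win + 1]
        med = statistics.median(w)
        mad = 1.4826 * statistics.median(abs(v - med) for v in w)
        if mad > 0 and abs(x[i] - med) > t * mad:
            out[i] = med

    # 3) baseline division -> proportional pupil size
    base = statistics.median(out[:baseline_n])
    return [v / base for v in out]
```

For real data you would use the linked blink-removal and Hampel implementations, but the shape of the pipeline is the same.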
  • asked a question related to Visual Attention
Question
9 answers
Or are general tests for dementia administered such as the MMSE and MoCA? I am interested in all tests that measure cognitive function, but particularly visual attention.
Relevant answer
Answer
Personally, once testing (or even a simple conversation with the person) has identified that all is not right with memory, judgement, or decision-making, I find that collateral from family/carers frequently indicates whether the changes have been gradual over time or whether there have been 'steps down' in function, especially if they are told how TIAs can appear to an observer. Hindsight, as they say, gives us all 20:20 vision, and when looking back, family members will frequently recall a day when their relative was exceptionally sleepy, hard to wake, or confused but, following a good night in bed, had recovered, although not quite back to where they were before.
Don't forget that people with vascular dementia often display moments of insight, which is different from what is displayed by sufferers of AD.
Also, as we age I firmly believe that we all experience vascular events - smoking, diet, cholesterol etc combine to fur up those arteries and eventually there will be small blockages in the brain.  It's all about reducing the risk of a 'major' event which could alter our personality and/or performance.
My advice - do tests by all means, but get good collateral, use your gumption and look after your arteries!
  • asked a question related to Visual Attention
Question
13 answers
I believe some metrics related to the eye, such as pupil dilation, may give an indication of the extent to which something being looked at is being actively processed. However, I am interested in ways to determine whether someone is paying attention to (cognitively processing) what they are looking at in natural, real-world conditions, where changing light levels may make it difficult to use pupil dilation as a measure. I am therefore wondering if there are any tell-tale signs from eye movements that can reveal whether something is being actively processed and has some cognitive importance to the observer.
For example, research on inattentional blindness shows that just because something in our environment is fixated does not mean it is perceived or processed. Also, research has been carried out about mind-wandering during reading which suggests eye movements may be qualitatively different during periods of mind-wandering compared with when what is being read is being processed. Are there any similar findings for natural situations such as just walking through an environment?
Relevant answer
Answer
Bear in mind that covert shifts of attention can happen in the absence of eye movements (which is why they're called covert).  See for example the introduction of the article linked below.
  • asked a question related to Visual Attention
Question
6 answers
With a discrimination task I know how to work out whether performance for a particular participant is better than chance: one compares the observed number of correct responses with the frequency expected by chance (which depends on the number of response options in the task).
However, with a signal-detection task I am not sure what to do. How does one determine whether, for instance, a d-prime of 0.04 indicates responding better than would be expected by chance? I don't think one can use the normal binomial formula (as with a simple 2AFC discrimination task), because the numbers of signal-present and signal-absent trials are not necessarily equal in a detection task; the expected frequencies for a random responder would also presumably differ depending on their bias towards 'yes' or 'no'. So the whole thing is far more complicated. Presumably this is a fairly common issue with detection tasks, so there must be a formula somewhere to deal with it.
Relevant answer
Answer
To determine whether the d′ value for each observer in each condition is significantly greater than zero (chance), you can use Marascuilo's test (Marascuilo, L.A., 1970. Extensions of the significance test for one-parameter signal detection hypotheses. Psychometrika 35, 237–243).
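As an illustration of this kind of test, here is a Python sketch using the asymptotic variance of d′ (Gourevitch & Galanter, 1967), which the Marascuilo paper extends; the log-linear correction is my own addition to avoid infinite z-scores at proportions of 0 or 1, so check the original papers before relying on it:

```python
from math import sqrt
from statistics import NormalDist

def dprime_vs_chance(hits, n_signal, fas, n_noise):
    """Return (dprime, z, p): d' plus a two-tailed z-test of d' = 0,
    using the asymptotic variance of d' estimated from the hit and
    false-alarm proportions. Handles unequal trial counts."""
    nd = NormalDist()
    # log-linear correction keeps proportions away from 0 and 1
    ph = (hits + 0.5) / (n_signal + 1)
    pf = (fas + 0.5) / (n_noise + 1)
    zh, zf = nd.inv_cdf(ph), nd.inv_cdf(pf)
    d = zh - zf
    var = (ph * (1 - ph) / (n_signal * nd.pdf(zh) ** 2)
           + pf * (1 - pf) / (n_noise * nd.pdf(zf) ** 2))
    z = d / sqrt(var)
    p = 2 * (1 - nd.cdf(abs(z)))   # two-tailed p-value
    return d, z, p
```

Note how the variance accounts separately for the signal-present and signal-absent trial counts, which addresses the unequal-trial concern in the question.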
  • asked a question related to Visual Attention
Question
5 answers
I would like to know whether anyone has run a study exposing a group of people to a set of IAPS stimuli while recording several physiological measurements. I would like to share experiences.
Relevant answer
Answer
Dear Hakan Boz and all others participating: the issue I have with the IAPS is that, even though it is the most interesting set used for research in emotional assessment, the pictures are somewhat outdated; over roughly 20 years, changes in many aspects of society have changed how we respond to the proposed stimuli. For instance, some pictures that 20 years ago could be thought to trigger sexual arousal may no longer fit present-day concepts. But then the question arises: if this was a standard used many times, why change it and lose the chance of comparison with previously collected data? On the other hand, if the pictures no longer fit current patterns, why keep using them? So, in my opinion, the set could be updated with some newer pictures. Having said that, I must say that I don't know whether our colleagues at the University of Florida's NIMH center have added any new pictures to the IAPS collection.
  • asked a question related to Visual Attention
Question
4 answers
It is known that attention is important for maintaining accurate smooth pursuit of a moving object, so I think that if the pursuit velocity increases, the attention level should also increase; but I am not 100% sure about this, and I would love to get some feedback and suggestions of papers confirming one of these possibilities.
Relevant answer
Answer
First off, thanks to Pierpaolo for passing along the study. As for your question, Nelson, there are some clarifications I would need from you before putting forward something beyond this post. By "pursuit", do you mean visual tracking? There are all kinds of studies on the mechanics of visual tracking, but the notion that the physical act of following the path of a moving object can somehow be "sped up" speaks more to the anticipatory set involved, as prediction takes over based on physical principles such as the effect of gravity remaining constant.
The prediction sets (heuristics) used are an indication of the mind's way of reducing cognitive load. This would be difficult to correlate with degree of focus, which I'd also need to confirm is what you mean by "increased attention level". From a 'volume of information processed' standpoint, because the areas of the brain adept at chunking information are in use, you might compromise on some of the precision implied by a velocity increase in order to keep the thing you are tracking "in frame".
One of the interesting things the paper Pierpaolo passed along addresses is the phenomenological nature of moving objects generating neural processing that differs from the processing that occurs when gathering information from a stationary stimulus (see above, the difference between a still frame and re-framing). So what you may encounter is not an elevation of attention volume (as in attention level), but first a categorization of attention based on stimulus type, and then a determination of the attention required. Depending on your familiarity with the stimulus, a novel still frame may require more attention than the re-framing of a fast-moving but familiar (and predictable) object. As you may already be aware, that is the study of Receiver Operating Characteristics, and I have some studies on the role that familiarity plays in awareness, should those be of help. In the meantime, I've attached a study that may be of interest.
  • asked a question related to Visual Attention
Question
10 answers
Dear all, I am applying the principles of human attention (visual stimulation) to machine vision, so it would be great if you could help answer this question:
How is human visual attention attracted to an object (region) in an image?
1- The object is irregular compared to the background.
2- The object is in the middle of the image.
3- The object has high contrast.
4- The object's colour is rare.
5- The object is larger in size.
6- Others: please state them.
Thanks for your cooperation.
Relevant answer
Answer
Motion onset is a strong attentional cue.
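To give a flavour of how cues 3 and 4 (contrast and colour rarity) are operationalised in saliency models, here is a deliberately crude grayscale sketch in the spirit of Achanta et al.'s frequency-tuned saliency (the real method works on a blurred Lab-space colour image; this toy version and its names are mine):

```python
def frequency_tuned_saliency(gray):
    """Toy saliency for a flat list of grayscale intensities: each
    pixel's saliency is its absolute difference from the global mean,
    so rare, high-contrast intensities score highest."""
    mu = sum(gray) / len(gray)
    return [abs(v - mu) for v in gray]
```

A single bright pixel on a dark background gets by far the highest score, which is exactly the "rarity plus contrast" intuition from the list above; motion onset, being temporal, needs a frame-difference term on top of this.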
  • asked a question related to Visual Attention
Question
5 answers
I need a measure that yields 0 or 100 if the two RGB images being tested are the same, and a different value if there is some dissimilarity between them. The number should represent mutual information.
Relevant answer
Answer
Regarding images, mutual information means how much shared information is available between them. Therefore, a good approach is to compare images using CBIR (content-based image retrieval) methods. These methods return a similarity index; using this index, you can determine the similarity of a given image to any image in a database.
But if you want to see how similar two given images are, you may use the MSSIM (mean structural similarity index measure) or SSIM measure.
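Mutual information itself can be estimated directly from the joint intensity histogram of the two images. A small pure-Python sketch for flattened lists of 8-bit intensities (my own implementation; for real work you would use an image-processing library and apply it per colour channel):

```python
from collections import Counter
from math import log2

def mutual_information_bits(img1, img2, bin_width=8):
    """Mutual information (in bits) between two equal-length flat
    lists of 8-bit pixel intensities, estimated from the joint
    histogram of binned intensities. Identical images yield their
    (binned) entropy; unrelated images yield values near zero."""
    pairs = [(a // bin_width, b // bin_width) for a, b in zip(img1, img2)]
    n = len(pairs)
    pxy = Counter(pairs)                    # joint histogram
    px = Counter(a for a, _ in pairs)       # marginal of img1
    py = Counter(b for _, b in pairs)       # marginal of img2
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())
```

An identical pair returns the binned image entropy, so to get the 0-to-100 style score described in the question, one could normalise the MI by that entropy and scale by 100.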
  • asked a question related to Visual Attention
Question
7 answers
We are considering obtaining a Jazz eye tracker to be used in conjunction with our existing BioSemi EEG amplifiers. I would be very interested to hear what experiences other labs have had with such a setup. How convenient and reliable is it to use? How much calibration is necessary? Does the head-mounted eye tracker cause artifacts in the EEG? Is it comfortable for the participant to wear? Is there an easy way to feed the eye-movement data back to the stimulation PC during the experiment (e.g. using the Psychophysics Toolbox), or is it suitable mainly for offline analysis together with the EEG data?
Relevant answer
Answer
I have used this system for some quick experiments in a clinical setting. It is reliable; we also correlated the outcome with a search-coil system. I agree that vertical eye movements may be an issue, but if you are interested in saccade velocity and the dynamic properties of saccades, it should work just fine. There are no issues with the horizontal system. The major drawback of this system is that it gives conjugate eye position (averaging both eyes, I believe). It is therefore not an appropriate system for measuring the eye movements of a subject who has dysconjugate eye movements.
thanks,
  • asked a question related to Visual Attention
Question
6 answers
memory and comprehension
memory and phoneme awareness
memory and visual attention 
memory and rapid automatized naming
Relevant answer
Answer
Thanks! I will use these links!
  • asked a question related to Visual Attention
Question
8 answers
We recently bought an iCub2 with enhanced communication abilities (see Nina below). We are currently working on visual attention and trying to characterize human observers' perception of the robot's gaze direction. We were surprised! The morphology of robotic eyes, with no deformation of the eyelids and palpebral commissure, strongly biases the estimation of eye direction as soon as the gaze is averted.
Are you aware of any study (similar to what SAMER AL MOUBAYED and KTH colleagues have done with Furhat) on robots?
Thank you in advance for your help!
Relevant answer
Answer
I knew of the work of the KTH people with Furhat... similar to yours. A nice piece of work. Lamp avatars are a bit different from robots: the eyes are active. It sounds strange that nobody has done what you did with real robots... let's keep in touch.
  • asked a question related to Visual Attention
Question
5 answers
I am interested in analyzing a large volume of images in terms of early visual perception. For this reason I am interested in analyzing fixation points (the recorded human fixation points for a given image) with respect to a visual saliency analysis approach.
Relevant answer
Answer
You will find many resources by following the "databases" link on this web page.
Seven datasets are available covering several conditions (3D, video, still images, including some distortions and various tasks).
Raw eye-tracking data are available, plus pre-processed data (saliency maps, ...).
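One standard way to relate recorded fixation points to a model saliency map is the Normalized Scanpath Saliency (NSS; Peters et al., 2005). A minimal Python sketch (my own formulation, using the population standard deviation):

```python
from math import sqrt

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: z-score the model's saliency map,
    then average it at the human fixation locations. Values near 0
    mean the model is at chance; higher is better.

    saliency  : 2-D list of floats (rows of the saliency map)
    fixations : list of (row, col) fixation coordinates
    """
    vals = [v for row in saliency for v in row]
    n = len(vals)
    mu = sum(vals) / n
    sd = sqrt(sum((v - mu) ** 2 for v in vals) / n)
    if sd == 0:
        return 0.0          # a flat map carries no information
    return sum((saliency[r][c] - mu) / sd
               for r, c in fixations) / len(fixations)
```

The pre-processed saliency maps in the datasets mentioned above can be scored this way against the raw fixation data, alongside other common metrics such as AUC and correlation coefficient.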
  • asked a question related to Visual Attention
Question
40 answers
The visual system is perhaps the best understood sensory system of the mammalian brain. However, one question has always bothered me...
How can our perception of the visual world (e.g. flowers in a pot, a deck railing that is horizontal, trees in the distance) all appear to be solid and stationary while our gaze (and presumably our entire representation of the visual world in area 17 of visual cortex) is fluctuating wildly in response to eye and head movements? Wouldn't this require re-mapping of the visual world on the neocortex with every saccade and head movement?
Furthermore, how can we clearly discern movement of single objects within our visual environment (a flying bird in our peripheral vision) when the whole visual world is gyrating with every saccade and rotation of our head? Is this a simple problem that I somehow just didn't hear the answer to?
Relevant answer
Answer
Response to the original question:
When you decide to move your eyes, the same signal that activates your eye muscles is also sent (by recurrent collaterals) to the brain to prepare it, by the same amount and direction of movement, to anticipate the displaced image.
If, instead of voluntarily moving your eyes, you displace an eye by gently tapping its side with a finger, the brain cannot anticipate or correct for the displacement, and the world appears to jerk.
Conversely, if your brain sends the signal that your eye is going to move but the muscle is in reality paralyzed, so that the eye stays in place, an experience of movement also occurs, since the image did not appear in the updated location as expected.
Objects that move on their own tend to appear in different places with respect to a stable background, like a person walking in front of a building: the background doesn't move, but the person keeps appearing in different places with respect to the building, so we infer that the person moved and that the building is stable.
Actual movement is not necessary to experience the sensation of movement. Check the stroboscopic effect (motion in film), the phi phenomenon (apparent motion when light jumps from one bulb to another), or the autokinetic effect (the precise position of an airplane against the sky appears closer or farther because it cannot be evaluated against a background).
  • asked a question related to Visual Attention
Question
5 answers
I'm interested in how attention selects and shifts when children read picture books.
Relevant answer
Answer
There is a lot of research on how even infants react to gaze cues (e.g. Farroni, T., Johnson, M. H., & Csibra, G., 2004), but Frischen, A., Bayliss, A. P., & Tipper, S. P. (2007) provide a good overview of attention and gaze cues in general, with one section considering development in particular.
There is much less literature on gesture perception. Mostly, gestures and other bodily social cues are regarded as "only" modifying the direction of social attention given by gaze cues. However, this view is under debate, and there is evidence that head rotation as well as gestures might be just as important as gaze cues (Langton, S. R., Watt, R. J., & Bruce, V., 2000).
However, to give an answer to your question about how all this might relate to children reading picture books, it would be important to know: Are you interested in the relative contributions of gaze and gestures? Their interaction? Or would you like to study both separately?
  • asked a question related to Visual Attention
Question
3 answers
global/local paradigm, emotional Stroop
Relevant answer
Answer
I suggest the following recent papers: Elsenbruch S et al., Gastroenterology 2010; 139:1310-9, and Tillisch K, Gastroenterology 2011; 140(2):407-411. Perhaps they may help answer your question.
  • asked a question related to Visual Attention
Question
6 answers
Would RT measurement be too imprecise/variable using a touchscreen response compared to a simple button press? I aim to measure attentional biases to/from emotional faces.
Relevant answer
Answer
Hi Margaret,
We recently combined a dot-probe task with a touch screen based approach/avoidance task, which sounds similar to what you are considering.
You can check out the details here: Sharbanee, J. M., Stritzke, W. G. K., Wiers, R. W., & MacLeod, C. (2013). Alcohol-related biases in selective attention and action tendencies make distinct contributions to disregulated drinking behaviour. Addiction, 108, 10, 1758–1766.
Hope that helps!
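For reference, the bias score commonly reported for dot-probe tasks like the one discussed above is just a difference of mean reaction times between probe locations. A minimal sketch; the trial layout, condition names, and RT values are hypothetical illustrations, not from the cited study:

```python
from statistics import mean

def attentional_bias_index(trials):
    """Compute a dot-probe attentional bias score in milliseconds.

    Each trial is a (condition, rt_ms) pair, where condition is
    'congruent'   (probe appears where the emotional face was) or
    'incongruent' (probe appears where the neutral face was).
    Bias = mean incongruent RT - mean congruent RT; a positive value
    indicates attention was drawn toward the emotional faces.
    """
    congruent = [rt for cond, rt in trials if cond == "congruent"]
    incongruent = [rt for cond, rt in trials if cond == "incongruent"]
    return mean(incongruent) - mean(congruent)

# hypothetical RTs (ms): faster when the probe replaces the emotional face
trials = [("congruent", 480), ("congruent", 500),
          ("incongruent", 520), ("incongruent", 540)]
bias = attentional_bias_index(trials)  # 530 - 490 = 40 ms vigilance effect
```

Whatever the response device, the added (and possibly more variable) latency of a touchscreen affects both conditions equally, so a difference score like this partially cancels it out, though trial-to-trial timing jitter still inflates variance.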
  • asked a question related to Visual Attention
Question
2 answers
We are currently conducting a business psychology project. My group has the following topic: 'The relationship between death-salience and visual attention'.
Within the sector of terror management theory (TMT), we are specifically researching whether there is a relationship between "threatening images" and human visual attention. In other experiments, a relationship was found between images of physical injury and gaze duration (Journal of Experimental Social Psychology, "Looking away from death: Defensive attention as a form of terror management" by Gilad Hirschberger, Tsachi Ein-Dor, Avi Caspi, Yossi Arzouan, and Ari Z. Zivotofsky).
We are planning to use an eye tracker to see if there is a relationship between threatening images and visual attention (gaze duration, focus, focus time, etc.). The participants will be subliminally primed in advance and afterwards exposed to sets of images. Within each set, neutral images will be mixed with one threatening image (5 neutral, 1 threatening). The images will be generally similar regarding colours, salience, etc., but only one will represent a threat. The threatening images will be tested in advance, so we will have validated threatening images.
As we are still preparing our experiment I had the thought of getting some feedback in advance, as our group has certain points we are still researching and are not quite sure about.
Questions:
1.) As described above and researched within TMT, we assume we are working in the proximal sector. But as we are not quite sure about the actual border between the proximal and distal approach, we would like to get some feedback.
2.) Looking for threatening images I have found several databases offering 'license-free' usage, but I am not sure about that. Does anybody have any experience with a free database online for researching images in experimental contexts? Or any other idea how/where to get license-free images for the mentioned experiment (besides shooting them ourselves)?
3.) We are planning to have 30 participants in the experimental group and 30 in the control group. We might have to cut down to 20 in each group if we have trouble recruiting participants. What would you consider a minimum number of participants?
4.) If we can confirm the relationship, we are still looking for practical applications of our data. We assume it will be usable in a marketing context with ads, commercials, etc., but we also think it might be useful in traffic psychology (warning signs, etc.). We want to keep this point open, as we still need to do the actual research, but we would like to hear other ideas for practical uses of the outcome of our experiment.
5.) We are planning to use sets of 6 images distributed across the screen surface, without automatic centering or other effects that might be induced. Our concrete idea is to use 2 rows of 3 images. We would appreciate feedback on the arrangement of the images.
6.) The experiment mentioned above, from the Journal of Experimental Social Psychology, used subliminal priming methods. We plan to use these as well, as we think the risk of obvious death-priming should be reduced. However, we are having trouble finding validations of subliminal priming. Any feedback here would be helpful.
7.) Our last question: we think using black-and-white pictures might help avoid salience effects regarding colour (e.g. focusing on red). Does anyone have ideas here?
Thank you very much in advance for your feedback!
Relevant answer
Answer
Hello,
I have some experiences with TMT explorations that you might find useful.
#2 - I have found some very good terror-oriented images in IAPS concerning blood and death directly. You will also get valence and arousal ratings in the IAPS dataset. It is free in principle; all you need is to request access.
#3 - 30 per group would be ideal, as some participants might have to be excluded later.
#5 - How about using 4 images to keep the distance from the four corners of the screen constant (2 rows, 2 columns)?
#6 - Subliminal priming often gets tricky with eye tracking. You might want to experiment in the subliminal range (30-70 ms) and also slightly higher (150 ms+) during piloting.
#7 - Yes, b/w has benefits, but I have found that for terror/fear images, colors have a great advantage. So it would be best if you can run the color images through a filter that standardizes brightness (salience), etc.
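One practical note on the priming durations mentioned above: a display can only show a stimulus for whole multiples of its refresh period, so a requested duration such as 30 ms is not actually achievable on a 60 Hz monitor. A minimal sketch (pure Python; the 60 Hz refresh rate is an assumed example) for snapping a requested prime duration to frames before piloting:

```python
def frame_aligned_duration(requested_ms, refresh_hz=60):
    """Snap a requested presentation time to the nearest whole number
    of monitor refresh frames (minimum one frame).

    Returns (n_frames, achievable_ms). Subliminal primes can only be
    shown for exact multiples of the frame duration, so e.g. a 30 ms
    prime on a 60 Hz display (frames of ~16.7 ms) actually lasts
    either ~16.7 or ~33.3 ms.
    """
    frame_ms = 1000.0 / refresh_hz
    n_frames = max(1, round(requested_ms / frame_ms))
    return n_frames, n_frames * frame_ms

# on a 60 Hz display, a requested "30 ms" prime becomes 2 frames (~33.3 ms)
frames, actual = frame_aligned_duration(30, refresh_hz=60)
```

Reporting the achievable duration (and the refresh rate) rather than the nominal one makes the subliminal-range claim verifiable.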
  • asked a question related to Visual Attention
Question
8 answers
Many scientists believe such information is deleted by attentive processes, but there is no proof of that.
Relevant answer
Answer
To come back to your question about data that is not entered: I would like to mention that many researchers now believe attentional processes are mediated by feedback between lower and higher visual areas. In other words, features and representations come into the attentional domain when recurrent activity develops between higher and lower areas for particular features. It is possible that this recurrent activity is the mediator of conscious working memory (see the model of Dehaene). So visual input that leads to activation of neurons up to the prefrontal cortex, e.g. the frontal eye fields, probably involves phasic responses within 100 ms that die out after 200 ms if they do not attract or engage our attentional systems. Attentional modulation, even down to V1, operates mostly after 200 ms.
Interestingly, these unconsciously activated features can later be brought into attention (iconic memory) if there has been no new visual input causing backward masking. The reason is probably that neural activity lingers on and becomes suppressed only after the activation of other neurons.
  • asked a question related to Visual Attention
Question
1 answer
Is there any EEG database for distraction or attention during driving? I have searched for nearly a week but found none.
Relevant answer
Answer
Hi,
I suggest you contact Benjamin Blankertz's team. Have a look at their paper entitled 'EEG potentials predict upcoming emergency brakings during simulated driving'.
  • asked a question related to Visual Attention
Question
20 answers
This is a clinical study examining how the format in which visual information is delivered - static, video, or presented in person - may affect adolescent memory. fMRI scans would be taken as subjects recalled the type, volume, and temporal memories associated with the presentation and calculation of skill-testing questions delivered four weeks prior. The subjects would be focused on the calculations they were to perform, while the researchers would examine whether different neural substrates are elicited as the person recalling the testing event verbalizes what they can recall.
Despite our best efforts, we have not been able to find research that explains what effect, if any, the medium of visual information has on adolescent memory. If there are folks out there who know of something we should be aware of, we would be grateful.
Relevant answer
Answer
An interesting aspect is the episodic component of your memory task. You will present a math task either statically, as a video, or via a person. When testing four weeks later, you may measure recall (episodic memory of the person) as well as recognition in trials with static presentation. Check out: http://psycnet.apa.org/journals/bul/133/5/800.html
And there are individual differences in recall/recollection, simply because some people encode very visually and vividly whereas others abstract immediately (getting rid of all episodic information) - but that might not be what you are after?!
  • asked a question related to Visual Attention
Question
5 answers
I am conducting a study that will present participants with categories of stimuli to determine the influence of stimulus valence on attentional processes. To avoid low-level confounds, it is important that the stimuli be matched on physical attributes such as luminance, contrast, and spatial frequency. Any advice on deriving these indices would be very much appreciated.
Relevant answer
Answer
If you can characterize/calibrate your display, it should be possible to convert the value of each pixel to a luminance level. There are different ways to calculate contrast, but what seems to affect behavior most consistently is simply the standard deviation over the luminance values (note that Michelson or Weber contrast don't really work for natural stimuli), so that could be a first contrast measure to control. There is a paper by Peli that proposes a more complex measure, which might be worthwhile to check out. Color contrast may also affect gaze, and to control for this, you would want to convert (RGB) values to some color space where color is independent of luminance. We usually use DKL space, which is also supposed to be physiologically correct. (Luminance and luminance contrast can also be corrected for in DKL space, independent of any color corrections.) I've never done any corrections on spatial frequency, but a 2D FFT performed on a luminance-only version of your stimuli should be a good start. HTH.
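The two measures suggested above can be sketched in a few lines of numpy: RMS contrast as the standard deviation of the luminance values (normalised by the mean), and a rotationally averaged 2D FFT amplitude spectrum as a coarse spatial-frequency profile. This assumes a linearized display so that pixel values are proportional to luminance; in practice that mapping comes from photometric calibration, and the sine-grating "image" below is a hypothetical stand-in for a real stimulus:

```python
import numpy as np

def rms_contrast(lum):
    """RMS contrast: standard deviation of luminance, normalised by
    mean luminance. Unlike Michelson or Weber contrast, this behaves
    sensibly for natural images."""
    lum = np.asarray(lum, dtype=float)
    return lum.std() / lum.mean()

def radial_amplitude_spectrum(lum, n_bins=16):
    """Rotationally averaged FFT amplitude spectrum of a luminance
    image: a coarse spatial-frequency profile that two stimulus sets
    can be matched on (mean is subtracted, so the DC term is ~0)."""
    lum = np.asarray(lum, dtype=float)
    amp = np.abs(np.fft.fftshift(np.fft.fft2(lum - lum.mean())))
    h, w = lum.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2)       # radius = spatial frequency
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    sums = np.bincount(idx, weights=amp.ravel(), minlength=n_bins + 2)
    counts = np.bincount(idx, minlength=n_bins + 2)
    return sums[:n_bins] / np.maximum(counts[:n_bins], 1)

# hypothetical 64x64 stimulus: a horizontal sine grating around mean luminance 50
x = np.arange(64)
grating = 50 + 20 * np.sin(2 * np.pi * 8 * x / 64)   # 8 cycles/image
img = np.tile(grating, (64, 1))
c = rms_contrast(img)                   # amplitude/(sqrt(2)*mean) ~ 0.283
profile = radial_amplitude_spectrum(img)
```

Matching then amounts to equalising these numbers across stimulus categories, e.g. rescaling each image's deviation from its mean until the RMS contrasts agree.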