Visual Attention - Science topic
Questions related to Visual Attention
I'm doing a research proposal and want to compare bilingual people to monolingual people on a dot perspective task.
- The first IV (IV1) will be language ability with two levels: monolingual (control) /bilingual
- The second IV (IV2) will ONLY be applied to the bilingual group: Ps are informed the avatar is bilingual or not (so two levels again, repeated measures with counterbalancing)
The DV is reaction times in the dot perspective task.
How would I go about analysing this? I was thinking of an ANOVA, but as the control group is not exposed to IV2, do I simply compare the means of all groups?
I want to compare
- Control group reaction times to BOTH levels of IV2 combined (overall RT for bilinguals)
- Control group reaction times to each level of IV2
- Level 1 vs level 2 of IV2 (whether avatar is said to be bilingual or not)
Is it best to split this study into 2 experiments or is it possible to keep it as one and analyse it as one?
You can suggest some literature too
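If it helps, one way to handle the three comparisons listed above is as planned contrasts rather than a single factorial ANOVA, since the design is unbalanced. Below is a minimal sketch using Welch's t-test; all reaction-time numbers are made up for illustration and the function names are my own, not from any particular package:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with
    possibly unequal variances."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

# Hypothetical per-participant mean RTs in seconds.
control = [0.61, 0.58, 0.64, 0.60]             # monolingual controls
avatar_bilingual = [0.55, 0.52, 0.57, 0.54]    # IV2 level 1 (bilingual group)
avatar_monolingual = [0.59, 0.56, 0.60, 0.58]  # IV2 level 2 (bilingual group)

# Contrast: control vs. bilinguals overall. Average each bilingual
# participant's two conditions first, so each person contributes once.
overall = [(a + b) / 2 for a, b in zip(avatar_bilingual, avatar_monolingual)]
print(welch_t(control, overall))
```

The IV2 level 1 vs. level 2 comparison is within-subjects, so it would be a paired test on the bilingual group only; the "control vs. each level of IV2" comparisons reuse the same independent-samples test. With planned contrasts you would normally apply a multiple-comparison correction across the set.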
I am looking for papers that provide explanation analytically as well as mathematically
Dear RG community, I open this pedagogical eThread to have a friendly and sincere discussion on the need to teach visualization in the pure sciences, starting at the high-school level.
The idea came to mind after finishing a short MOOC on the need for visualization when teaching mathematics at a basic level, offered by the Open University in the UK.
I asked myself this morning the following elementary questions:
- "Do I really know how to visualize a complex number z = x + i y?"
- "Did I try it myself, at least once in my life as a science teacher, to visualize it on my own?"
I hope you find it enriching. Thanks in advance to all participants.
Using visualization in maths teaching CC licensed at:
Tools & channels:
"Dutchsinse" recommended by a friend, Dr. Stephen L.
Looking for Cooperation: 3D Brain Activity & Visual Attention
I am looking for research cooperation on understanding vision and brain activity with a new ultrafast, high-resolution ultrasound system (5 microns, 20,000 images/s) developed in France.
I am a professor at a Brazilian federal university, specialised in neuroscience.
The project would need to purchase this ultrasound system.
Hi everyone,
In eye-movement tracking studies with babies, it is sometimes difficult to get a perfect calibration. I wonder if there are well-established criteria, thresholds or recommendations for excluding calibrations.
Any input - tutorials, method reviews, or advice drawn from researchers' own experience - would be very helpful.
Second, has anyone experienced slightly shifted calibrations (i.e. the experimenter perceives that the baby is looking at the right target, but the eye tracker maps the eye movement with a shift, e.g. to the right, probably due to an issue with the initial calibration)? Are there ways to correct these, or should the participants' data be discarded?
Many thanks in advance for experienced input.
I am planning to conduct a study on commuting scenarios. For the study, I need to control tiredness (fatigue), visual attention and crowdedness in order to simulate an actual commuting scenario (e.g. by bus). I have read some articles on the Stroop effect, the n-back test and the Go/No-Go test for controlling fatigue, but I don't know how to control visual attention and crowdedness, and I couldn't find further articles on this. Any suggestions?
Hi,
I am aware of the issues and pitfalls of online behavioral testing - i.e. collecting data in a cognitive experimental (visual search) task over the internet. Yet, I decided to do a little test:
I want to assess the same visual search task in a controlled lab setting and on an online platform. I am not planning a repeated-measures design.
In the lab setting, around 30 participants should suffice (based on my previous experiments).
Here's the big question: If I want to validate the online method how many participants do I need online? Shall I go with the same sample size or should I aim for as many respondents as possible?
Thanks in advance!
Andras
I want to track participants' visual attention while interacting with a robot. Thus, I need a head mounted eye tracker or glasses. Could someone please tell me which system is best to use? Thank you.
What I am interested in is a sustained task setting where we can use top-down (goal-driven) and bottom-up (stimulus-driven) attention tasks independently to compare between them.
Is there an actual proof that visual-spatial cues enhance early or late visual processing (as compared to uncued visual processing)?
Without saying that what is implied by this question is "true", we know that when it comes to response times (RT), peripheral (or exogenous) and central (or endogenous) cues will have a different impact (e.g., Doallo et al., 2004). However, I struggle to find any event-related potentials (ERP) study that demonstrates an enhancement of perceptual processes following a cue (preferably peripheral) when compared to "self-generated", or spontaneous, gazes (i.e., overt spatial attention).
For instance, say that you have to look out for forest fires all day long. You will probably end up doing something else to fight boredom, and hence end up looking for possible smoke from time to time.
Now the question is: Will you be able to report (RT) smoke faster if you are spatially cued, because the cue allowed you to perceive (ERP) it faster?
To summarize:
Endogenous Cue – Spontaneous = ?
Exogenous Cue – Endogenous Cue = ?
Exogenous Cue – Spontaneous = ?
Reference
Doallo, S., Lorenzo-Lopez, L., Vizoso, C., Holguı́n, S. R., Amenedo, E., Bara, S., & Cadaveira, F. (2004). The time course of the effects of central and peripheral cues on visual processing: an event-related potentials study. Clinical Neurophysiology, 115(1), 199-210.
I'm looking for a simple MOT to assess selective visual attention in a group of video gamers. The test should be validated for this measure and free to use.
I'm essentially looking for something exactly like the link below, but that outputs raw data.
One can measure almost everything as far as the vital functions are concerned. E.g., by plethysmography we can measure vascular tonus and ANS-CNS coupling. EEG-ECG coupling, as measured by coherence, can reveal how much appreciation we have for one another. By EEG and MEG we can measure many brain functions, for example visual attention.
Moving to sensors in a t-shirt, we can estimate stroke volume and respiration rate. Is it enough to say that, using oculography, we can tell what is being attended to?
Can we say that someone is capable of flying an F-15 safely?
Jerzy
PS. Flight safety is my profession.
If we save one life, it is as if we had saved the whole world. Flight safety?
My collaborators and I are putting together a symposium for Psychonomics 2017 and have some findings that go against the grain (we don't find an attentional bias to select faces across several studies). We are looking for other presenters for this symposium, so if you have data that is relevant that you'd like to present with us please email me directly - ebirming@sfu.ca
Does anybody have the shapefile of MSFD subregions (http://celticseaspartnership.eu/wp-content/uploads/2014/08/CSP_European-Map_no-grid-page-001.770pxw.jpg)?
Thank you!
Tomaso
Hi! We want to investigate eye movements in patients with neurodegenerative diseases in a Russian cohort. To make the results comparable, we kindly ask you to share some of your basic study protocols (in Presentation or other software)!
The general data we found range between 10 and 20 min in young, healthy individuals, but more accurate data for these tasks (visual attention) would be great.
Is there any study which investigates whether the use of mobile phone while driving impairs involuntary attentional responses? Involuntary attention may be fundamental while driving, as it may allow drivers to quickly direct attentional resources to a sudden change within the environment.
I have an allegedly operational ASL Eye-trac 6000 eye tracker but not the control software for it. Does anyone have a copy?
The dual task will comprise a simultaneous auditory digit span (DS) and a visual response time (RT) test. Participants will hear a series of five-, seven- and nine-digit sequences presented in a random order that remains consistent for each participant. Participants will be asked to verbally repeat back the digits in the correct order 2 s after the onset of the final digit, with a time limit of 1 s per digit to recall the sequence. They are awarded 2 points for every correct digit recalled in the correct place and 1 point for each correct digit in the incorrect location; 1 point is deducted for each recalled digit that was not in the original sequence. The accuracy score is then converted into a percentage.
While engaged in the auditory task, participants will simultaneously take part in a visual RT test, in which an image of a small football is randomly presented for 200 ms on a white background in one of four quadrants of a computer screen. Participants will be required to press an allocated button with their dominant hand in response to the stimulus as quickly as possible. Images will be presented at pseudorandom intervals between the words "ready" and "go" in the DS test, appearing 750-1,000 ms before the onset of the next auditory stimulus. During each test, 96 footballs will be presented, and the reaction-time score will be recorded as the average time (ms) taken to respond to the stimuli across these trials.
Any help would be much appreciated! So far I have four slides with the football image in each quadrant... so I've got a long way to go!
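For the scoring rule described above (2 points per digit in the correct place, 1 point per recalled digit in the wrong place, minus 1 per intrusion, converted to a percentage), a small helper like the following could automate the marking. It is a sketch of my reading of the rule and assumes sequences without repeated digits:

```python
def score_digit_span(sequence, response):
    """Score one digit-span trial: +2 per digit recalled in the correct
    position, +1 per recalled digit present in the sequence but in the
    wrong position, -1 per digit not in the original sequence.
    Returns the score as a percentage of the maximum (2 * sequence
    length), floored at 0."""
    score = 0
    for i, digit in enumerate(response):
        if i < len(sequence) and digit == sequence[i]:
            score += 2           # correct digit, correct place
        elif digit in sequence:
            score += 1           # correct digit, wrong place
        else:
            score -= 1           # intrusion
    return max(score, 0) / (2 * len(sequence)) * 100

print(score_digit_span([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))  # perfect recall
```

How ties and repeated digits should be scored is not specified in the description, so those cases would need an explicit decision.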
Distance from the point of fixation can affect the attentional response in the human attentional control system. Is there any mathematical function that describes this?
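I am not aware of a single agreed-upon function, but one common modeling convenience (an assumption here, not an established answer) is a monotonically decreasing gain with eccentricity, e.g. a Gaussian "attentional window". The parameter values below are purely illustrative:

```python
from math import exp

def attentional_gain(eccentricity_deg, sigma_deg=5.0):
    """Gaussian falloff of attentional response with distance from
    fixation, in degrees of visual angle; sigma_deg controls the
    spread of the attentional window.  Illustrative only."""
    return exp(-(eccentricity_deg ** 2) / (2 * sigma_deg ** 2))

print(attentional_gain(0.0))   # maximal at fixation
print(attentional_gain(10.0))  # reduced in the periphery
```

Other shapes (e.g. exponential decay, or a "Mexican hat" profile with suppression around the focus) appear in the literature as well, so the functional form is itself a modeling choice.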
Looking for some advice on the use of eye-tracking glasses on toddlers (18 months+). We are interested in the TOBII glasses but are also open to recommendations if experienced users have a more kid friendly product in mind. We're interested in testing in rich environments (science museums, labs, classrooms) and for use in human robot interaction and imitation tasks (amongst other potential uses).
Any advice on whether commercially available eye trackers can deal with little heads would be very appreciated!
Thanks!
I know of some studies showing that stimuli considered phylogenetically fear-relevant, such as snakes or spiders, benefit from greater attentional capture than other stimuli, even for babies. I would like to know if any studies have investigated this effect with stimuli considered ontogenetically fear-relevant, such as guns, in a young population without experience of those stimuli (babies?).
I am interested in whether I should expect different interference effects (both behaviorally and in the EEG) when using 2 flankers surrounding the target compared to 4 flankers. I have found papers on different types of stimuli (arrows versus letters and so on), but I had no luck regarding the number of flankers presented.
Does anyone know of such a study, or have experience in presenting different numbers of flankers?
In many visual experiments, subjects are asked to memorize specific objects in a scene and are then reported to be tested with a post-memory test. No further details are given about what kind of memory test is conducted, or how it is carried out to measure participants' memorization.
Any examples of post memory tests? Any advice?
Given an input image, I would like to know if some computational method can be used to extract pre-attentive features or a sort of early visual attention map.
In the Emotion field, it is usually assumed that negative information is selected more efficiently than positive information. It is reflected in dot-probe tasks with a higher validity effect for negative information than for positive information.
I was just wondering: when you have a validity effect that is higher in one condition (not necessarily in the emotion field), does it reflect a selection that is more frequent, or a selection that is stronger?
If I rephrase, concerning the higher validity effect for negative information: does it mean that negative information is selected in more trials than positive information? Or that negative and positive information are selected with the same frequency, but the validity effect is stronger for negative information?
Does anybody work on, or know of, studies that have investigated the interaction between the Simon effect and the validity effect of a spatial cueing task?
I am working with an eye-tracking system. I have raw data of the XY-coordinate points on the screen that my eye looked at. Say I display a PDF file on the screen; I want to find which word was at a particular coordinate point. I am using Ghostscript for this purpose, but I am not sure how correct it is.
Can anyone please give me a suggestion for doing this, or point me to tools that can find which word is at a particular position on the screen?
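If you can get word bounding boxes out of the PDF (several PDF libraries can export word-level boxes), the lookup itself reduces to a point-in-rectangle test. A minimal sketch with hypothetical boxes, assuming the boxes have already been transformed into the same coordinate system as the gaze data:

```python
def word_at_point(word_boxes, x, y):
    """word_boxes: list of (word, x0, y0, x1, y1) bounding boxes in the
    same coordinate system as the gaze samples (e.g. screen pixels).
    Returns the word whose box contains (x, y), or None."""
    for word, x0, y0, x1, y1 in word_boxes:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return word
    return None

# Hypothetical boxes for two words on a rendered page.
boxes = [("visual", 100, 50, 160, 70), ("attention", 165, 50, 250, 70)]
print(word_at_point(boxes, 170, 60))  # prints "attention"
```

The error-prone part is usually not this lookup but the coordinate mapping: PDF points vs. screen pixels, page scaling/zoom, scroll offset, and the fact that PDF coordinates often have the y-axis pointing up while screen coordinates point down.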
I am working with eye movement metrics to infer visual task based on observed eye movement patterns.
One of these metrics is total scanpath length. Based on what I have read, I need to find the length of movement (saccade) between consecutive fixations. I assumed that this can be easily calculated using Euclidean distance in pixels, then converting it into degree of visual angle. Is my assumption correct?
When I searched for how a scanpath is computed, I encountered some programs where the scanpath is compressed and the similarity between scanpaths is calculated. Do I need to compress a scanpath (by removing repetitive sequences) in order to correctly calculate its length? For what purpose is the similarity between two scanpaths used?
Are there any additional criteria that I should also consider?
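The assumption about Euclidean distance sounds right for the length part; each inter-fixation distance in pixels can then be converted to visual angle from the screen geometry. A sketch (the pixel pitch and viewing distance below are example values, not recommendations):

```python
from math import atan, degrees, hypot

def scanpath_length(fixations_px, cm_per_px, viewing_distance_cm):
    """Total scanpath length: sum of Euclidean distances between
    consecutive fixations, returned both in pixels and in degrees of
    visual angle (converted per saccade, then summed)."""
    total_px, total_deg = 0.0, 0.0
    for (x0, y0), (x1, y1) in zip(fixations_px, fixations_px[1:]):
        d_px = hypot(x1 - x0, y1 - y0)          # Euclidean distance
        d_cm = d_px * cm_per_px                 # on-screen size
        total_px += d_px
        total_deg += degrees(2 * atan(d_cm / (2 * viewing_distance_cm)))
    return total_px, total_deg

fix = [(0, 0), (300, 400)]                      # one 500-px saccade
px, deg = scanpath_length(fix, cm_per_px=0.05, viewing_distance_cm=60)
print(px, round(deg, 2))
```

As far as I know, compression and string-edit-style similarity measures are used for *comparing* scanpaths (e.g. AOI sequences across trials or observers), not for computing total length, so no compression should be needed for the length metric itself.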
The protocol is roughly as follows: two set sizes were used (12 and 16); in each set, one item is the target and the rest are distractors. There were 16 sets of trials in total: 8 with the target present and 8 without. I have used eye tracking to assess visuospatial attention while participants performed the visual search. Which elements of the eye-tracking data will be useful for interpreting this scenario?
Hey,
I am looking for research on spatial memory, but not on the scale of cities or buildings (spatial navigation), but rather a small scale spatial memory of objects in a room, hall or even just on a workbench.
Preferably done in a real-world setup and not on a desktop. Possibly also done in VR (ideally on an HMD rather than just desktop VR; CAVEs could also work).
I'm analysing euphemisms in commercials. Commercials are multimodal. I analyse text/speech because it is relevant to me. But I also take into account the video, trying to make inferences about what I can call "visual euphemisms". I need to explain why I wanted to look at the video too.
Then, can I say that I'm analysing the video feature "due to its expressiveness and attentional relevance."?
Meaning that, while a magazine article can include images, they are neither extremely relevant to the message nor do they capture my entire attention; in a commercial, I really put a lot of effort into looking at the video, so it becomes the focus of my attention.
I am looking for any methodology on how to find the influence of road advertisements (advertisements along the road) on driver behavior. We are about to buy an eye tracker and have no experience in this kind of research. Can anyone suggest where to find research on this topic?
Suppose that if a PowerPoint presentation contains little information, people will pay attention and be quieter than with a full presentation containing a lot of information. I am thinking of cognitive overload, dual tasks and multisensory information that could decrease attention... Do you know any papers on this subject?
I'm running a fairly simple pupillometry experiment, with fixation preceding stimulus to capture the baseline. I've been able to capture differences between my two conditions during the stimulus (for long presentations) and after stimulus (for short presentations)...but I have differences between conditions in the fixation before the stimulus even comes on. What could possibly be causing this? Trials are completely randomized, and yet these differences are consistent across participants.
I would love to ask several questions about some attention-bias measuring tools - the VDP, the modified food Stroop, the one-back visual recognition task, the food ANT (attention network test) and other techniques. Does anyone have experience with any of these?
Many thanks!
Vicki
Does anyone have the color-shape task (E-Prime) and could assist with the matter?
Much appreciated, and thanks in advance!
I am working on the detection of creatures in the benthic environment. A data set of 5,000 images is available. The images are high resolution and have low noise. I have read some papers reporting detection rates of at most 85%, whereas we have a target of 95% (90% in the worst case). Please suggest some state-of-the-art techniques that could help me in my research (except deep learning, please).
In many visual attention and visual working memory tasks, separate groups use distinct methods to set the time at which a cue is presented: prior to the stimulus appearing (pre-cue) or after the stimulus display (post-cue).
What is the significance of manipulating this? How could the timing of the cue influence an experiment?
Has anyone had problems of synchrony between an eye-tracking system and the stimulus delivery software sending log messages to the eye-tracking system logs? And if so, what the possible sources of the problem could be, how to avoid the problem, and how to deal with it?
We have an MR-compatible eye tracker from MR Technologies hooked onto Arrington Research's ViewPoint EyeTracker software on one PC. On a different but connected PC, Neurobehavioral Systems Presentation software controls stimulus delivery. I have Presentation communicate to the eye tracking software's logs when my video stimulus starts and ends because I was told it was more reliable to manually start the eye tracking system rather than trying to control it through commands triggered from Presentation. As far as I understand, the eye-tracking system logs a line of data every 33ms even when there's tracking loss. I expect that I should have the same number of eye-tracking data lines between my video log markers -- so if my videos are 33 fps, I assume I should have the same number of eye data points as frames for a given video -- is that correct?
However, the eye-tracking data corresponding to a video is on the order of up to 3 seconds (1-80 data points) longer than the video. For example, for a random video and according to the Presentation log files for some random 2 subjects:
video X: 102 frames (25fps) = 4.1 sec length of video (according to Presentation log files and video)
sub1: 182 lines eye data @ 30fps = 6 sec of data marked as recorded during the length of video
sub2: 166 lines eye data @ 30fps = 5.5 sec
I am very hesitant to assume that the "video start" log marker I had Presentation send to the eye-tracker system log files really corresponds to when the video started (and then to take only as much eye data as the video length, ignoring the "video end" log marker) -- or could this be a safe assumption?
Thanks in advance for any help and explanations!
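As a rough sanity check on the numbers in the question (a sketch assuming the tracker logs at a constant rate), the expected number of data lines is simply the video duration times the logging rate:

```python
def expected_sample_lines(n_frames, video_fps, tracker_hz):
    """Expected number of eye-tracker data lines during a video,
    assuming a constant logging rate: duration * sampling rate."""
    duration_s = n_frames / video_fps
    return round(duration_s * tracker_hz)

# Video X from the question: 102 frames at 25 fps, tracker at ~30 Hz.
print(expected_sample_lines(102, 25, 30))
# Observed counts were 182 and 166 lines, i.e. roughly 1.5-2 s of
# extra data, which points at the start marker arriving early, the
# end marker arriving late, or both.
```

Comparing this expectation against the line count between the two markers, separately per trial, would show whether the excess is constant (a fixed marker latency, correctable by an offset) or variable (jitter, which is harder to fix post hoc).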
I am trying to understand whether there is a correlation between drivers' visual scanning of the road and their behaviour after they encounter a potential collision scenario. I have been working with Percentage Road Centre (PRC; Victor, 2005), but this gives me 5 individual measures. I was wondering if there is an established model, ratio or coefficient to assess the spread of eye fixations within a given time frame.
Thanks in advance!
Victor, T. W. (2005). Keeping eye and mind on the road. Doctoral thesis, Uppsala University, Uppsala.
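For what it's worth, one very simple single-number spread index that could be computed alongside PRC (an illustrative sketch, not an established driving metric) is the "standard distance": the RMS distance of fixations from their centroid within the time window:

```python
from math import hypot

def standard_distance(fixations):
    """RMS distance of fixation points from their centroid: a single
    number summarizing gaze spread within a time window."""
    n = len(fixations)
    cx = sum(x for x, _ in fixations) / n       # centroid x
    cy = sum(y for _, y in fixations) / n       # centroid y
    return (sum(hypot(x - cx, y - cy) ** 2 for x, y in fixations) / n) ** 0.5

tight = [(0, 0), (1, 0), (0, 1), (1, 1)]        # clustered gaze
spread = [(0, 0), (10, 0), (0, 10), (10, 10)]   # dispersed gaze
print(standard_distance(tight), standard_distance(spread))
```

A bivariate ellipse fitted to the fixation cloud (covariance-based) would give a directional version of the same idea, which may matter on roads where horizontal and vertical scanning differ.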
Hi
I have some experience with the EyeTribe (ET) and I wonder whether anybody has had similar problems to mine, so my questions are:
How many times do you repeat the calibration until you have a reliable one?
How long are your sessions if you use it in user studies? or
How long can you collect data with the ET? Does it automatically shut down after a while?
What is the viewing distance?
Do you use any extra tool to stabilize user's head or to preserve the calibration and the viewing distance?
Do you use ET more for post analysis or for real-time interaction?
Thanks
Which questionnaire is used to measure the practice level of Focused Attention Meditation?
I ran a study with two types of targets (3 schematic faces or 3 schematic houses). The task where participants had to detect the 3 schematic houses and ignore distractors (photos of houses or faces) seems to be more difficult (longer reaction times, more false alarms) than the task where participants had to detect the 3 schematic faces.
As the heterogeneity of the 3 schematic houses is more pronounced than that of the 3 schematic faces, I wanted to know whether any studies have investigated the impact of target heterogeneity on attentional capture by distractors.
We want to compare the encoding/retrieval of three different types of scenes in an fMRI study. To make sure that any differences are not due to the overall complexity of the stimuli, we would like to equate our pictures for complexity. What is the best way to do this?
We are working on a mobile system/app which monitors user interaction (such as screen unlocks and attentiveness to notifications), and I was wondering if there are any suitable functions to measure attention, taking into account reaction times, switching costs and so on. We are not intending to use any physiological or brain sensors; all measurements will be based on user interaction.
I have an eye-tracker dataset that includes pupil size information. The data was recorded primarily for examining eye movements and fixations, but I am interested in looking into whether the pupil size data says anything interesting about cognitive effort during a peripheral detection task. However, I am relatively new to pupillometry and having read some of the literature around pupil dilation and cognitive effort/attention, I can't identify a standard approach to cleaning and analysing a pupil size dataset (e.g. how to smooth the data, deal with blinks or missing data, how to identify outlying datapoints etc.). Is there such a standard approach or method?
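There does not seem to be one universal pipeline, but a common minimal sequence is: mark blinks/dropouts as missing, interpolate linearly across them, then smooth. A toy sketch of those two steps (the window size and the linear-interpolation choice are arbitrary here, not a standard):

```python
def interpolate_gaps(samples):
    """Linearly interpolate over missing samples (None), e.g. blinks.
    Assumes the trace starts and ends with valid samples; leading or
    trailing gaps would need separate handling."""
    out = list(samples)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while out[j] is None:          # find end of the gap
                j += 1
            step = (out[j] - out[i - 1]) / (j - i + 1)
            for k in range(i, j):          # fill the gap linearly
                out[k] = out[k - 1] + step
            i = j
        i += 1
    return out

def moving_average(samples, window=3):
    """Simple boxcar smoothing; real pipelines often use low-pass
    filters instead."""
    half = window // 2
    return [sum(samples[max(0, i - half):i + half + 1]) /
            len(samples[max(0, i - half):i + half + 1])
            for i in range(len(samples))]

trace = [4.0, 4.2, None, None, 4.8, 5.0]   # pupil diameter, e.g. mm
print(moving_average(interpolate_gaps(trace)))
```

In practice people also pad the interpolation window slightly around each blink (pupil estimates are distorted just before and after lid closure) and baseline-correct per trial; those choices vary between labs.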
Or are general tests for dementia administered such as the MMSE and MoCA? I am interested in all tests that measure cognitive function, but particularly visual attention.
I believe some metrics related to the eye, such as pupil dilation, may give an indication of the extent to which something being looked at is being actively processed. However, I am interested in ways to determine whether someone is paying attention to (cognitively processing) what they are looking at in natural, real-world conditions, where changing light levels may make it difficult to use pupil dilation as a measure. I am therefore wondering if there are any tell-tale signs from eye movements that can reveal whether something is being actively processed and has some cognitive importance to the observer.
For example, research on inattentional blindness shows that just because something in our environment is fixated does not mean it is perceived or processed. Also, research has been carried out about mind-wandering during reading which suggests eye movements may be qualitatively different during periods of mind-wandering compared with when what is being read is being processed. Are there any similar findings for natural situations such as just walking through an environment?
With a discrimination task, I know how to work out whether performance for a particular participant is better than chance. This is done by comparing the number of observed correct responses with the expected frequency of correct responses under chance (which depends on the number of response options in the task).
However, with a signal-detection task I am not sure what to do. How does one determine whether, for instance, a d-prime of 0.04 indicates responding better than would be expected by chance? I don't think one can use the normal binomial formula (as with a simple 2AFC discrimination task), because the numbers of signal-present and signal-absent trials are not necessarily equal in a detection task; the expected frequencies of a random responder would also presumably differ depending on their level of bias towards 'yes' or 'no'. So the whole thing is far more complicated. Presumably this is a fairly common issue with detection tasks, so there must be a formula somewhere to deal with it.
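One workable route (a sketch of one convention, not the only one): d′ is zero exactly when the hit rate equals the false-alarm rate, so "better than chance" can be tested as a two-proportion z-test of hits vs. false alarms. That handles unequal trial counts and is insensitive to yes/no bias, since bias shifts both rates together:

```python
from math import sqrt
from statistics import NormalDist

def dprime(hits, n_signal, fas, n_noise):
    """d-prime with a simple log-linear correction to avoid infinite
    z-scores at observed rates of 0 or 1."""
    h = (hits + 0.5) / (n_signal + 1)
    f = (fas + 0.5) / (n_noise + 1)
    nd = NormalDist()
    return nd.inv_cdf(h) - nd.inv_cdf(f)

def above_chance_p(hits, n_signal, fas, n_noise):
    """Two-sided p-value for hit rate != false-alarm rate (pooled
    two-proportion z-test); d' = 0 iff the two rates are equal,
    regardless of the observer's response bias."""
    p1, p2 = hits / n_signal, fas / n_noise
    pooled = (hits + fas) / (n_signal + n_noise)
    se = sqrt(pooled * (1 - pooled) * (1 / n_signal + 1 / n_noise))
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

print(dprime(40, 50, 10, 50))
print(above_chance_p(40, 50, 10, 50))
```

With small trial counts, an exact test on the same 2x2 table (hits/misses vs. false alarms/correct rejections) would be the more conservative version of the same idea.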
I would like to know if anyone has been involved in a study exposing a group of people to a set of IAPS stimuli and recording several physiological measurements. I would like to share experiences.
It is known that attention is important for maintaining accurate smooth pursuit of a moving object, so I think that if the pursuit velocity increases, the attention level should also increase; but I am not 100% sure about this, and I would love to get some feedback and suggestions of papers confirming either possibility.
Dear all, I am applying the principles of human attention (visual stimulation) to machine vision, so it would be great if you could help answer this question:
How is human vision attracted to an object (region) in an image? Is it because:
1- the object is irregular compared to the background,
2- the object is in the middle of the image,
3- the object has high contrast,
4- the object is rare in colour,
5- the object is larger in size,
6- others: please state them.
Thanks for your cooperation.
I need a measure which returns 0 or 100 if the two RGB images being tested are the same, and a different value if there is some dissimilarity between them. The number should represent mutual information.
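For reference, mutual information itself is 0 bits for independent images and equals the image entropy for identical ones; rescaling it to a fixed 0-100 range (e.g. dividing by the entropy) is an extra normalization you would have to choose. A toy estimate from the joint intensity histogram, on flattened grayscale values:

```python
from collections import Counter
from math import log2

def mutual_information(img_a, img_b):
    """Mutual information (in bits) between two equal-length intensity
    sequences (e.g. flattened grayscale images), estimated from the
    joint histogram.  Identical images give MI = H(image); independent
    images give MI near 0."""
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))   # joint intensity counts
    pa = Counter(img_a)                  # marginal counts, image A
    pb = Counter(img_b)                  # marginal counts, image B
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

img = [0, 0, 1, 1]
print(mutual_information(img, img))           # equals H(img) = 1 bit
print(mutual_information(img, [0, 1, 0, 1]))  # independent -> 0 bits
```

For RGB you could compute this per channel or on a luminance conversion; with many intensity levels, binning the histogram is usually needed to get a stable estimate.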
We are considering obtaining a Jazz eye tracker to be used in conjunction with our existing BioSemi EEG amplifiers. I would be very interested in hearing what experiences other labs have had with such a setup. How convenient and reliable is it to use? How much calibration is necessary? Does the head-mounted eye tracker cause artifacts in the EEG? Is it comfortable for the participant to wear? Is there an easy way to feed the eye-movement data back to the stimulation PC during the experiment (e.g. using the Psychophysics Toolbox), or is it suitable mainly for offline analysis together with the EEG data?
memory and comprehension
memory and phoneme awareness
memory and visual attention
memory and rapid automatized naming
We recently bought an iCub2 with enhanced communication abilities (see Nina below). We are currently working on visual attention and trying to characterize the perception of the robot's gaze direction by human observers. We were surprised! The morphology of robotic eyes, with no deformation of the eyelids and palpebral commissure, strongly biases the estimation of gaze direction as soon as the gaze is averted.
Are you aware of any study (similar to what SAMER AL MOUBAYED and KTH colleagues have done with Furhat) on robots?
Thank you in advance for your help!
I am interested in analyzing a large volume of images in terms of early visual perception. For this reason, I am interested in analyzing fixation points (fixation points tracked from human observers for a given image) with respect to visual saliency analysis approaches.
The visual system is perhaps the best understood sensory system of the mammalian brain. However, one question has always bothered me...
How can our perception of the visual world (e.g. flowers in a pot, a deck railing that is horizontal, trees in the distance) all appear to be solid and stationary while our gaze (and presumably our entire representation of the visual world in area 17 of visual cortex) is fluctuating wildly in response to eye and head movements? Wouldn't this require re-mapping of the visual world on the neocortex with every saccade and head movement?
Furthermore, how can we clearly discern movement of single objects within our visual environment (a flying bird in our peripheral vision) when the whole visual world is gyrating with every saccade and rotation of our head? Is this a simple problem that I somehow just didn't hear the answer to?
I'm interested in how attention selects and shifts when children read picture books.
global/local paradigm, emotional Stroop
Would RT measurement be too imprecise/variable using a touchscreen response compared to a simple button press? I aim to measure attentional biases towards/away from emotional faces.
We are currently conducting a business psychology project. My group has the following topic: 'The relationship between death-salience and visual attention'.
Within the sector of terror management theory (TMT), we are specifically researching whether there is a relationship between "threatening images" and human visual attention. In other experiments, a relationship between images of physical injury and gaze duration has been found (Journal of Experimental Social Psychology, "Looking away from death: Defensive attention as a form of terror management" by Gilad Hirschberger, Tsachi Ein-Dor, Avi Caspi, Yossi Arzouan and Ari Z. Zivotofsky).
We are planning to use an Eye-tracker and see if there is a relationship between threatening images and visual attention (gaze-duration, focus, focus-time, etc.). The participants will be subliminally primed in advance, and afterwards exposed to a set of images. Within these sets there will be neutral images mixed with one threatening image (5 neutral, 1 threatening). The images will be generally similar regarding colours, saliences, etc. but only one will represent a threat. The threatening images will be tested in advance so we have validated threatening images.
As we are still preparing our experiment I had the thought of getting some feedback in advance, as our group has certain points we are still researching and are not quite sure about.
Questions:
1.) As described above and researched within TMT, we assume we are working in the proximal sector. But as we are not quite sure about the actual border between the proximal and distal approach, we would like to get some feedback.
2.) Looking for threatening images I have found several databases offering 'license-free' usage, but I am not sure about that. Does anybody have any experience with a free database online for researching images in experimental contexts? Or any other idea how/where to get license-free images for the mentioned experiment (besides shooting them ourselves)?
3.) We are planning to have 30 participants in the experimental group and 30 in the control group. We might have to cut down to 20 in each group if we have trouble recruiting participants. What would you consider a minimum number of participants?
4.) If we can confirm the relationship, we are still looking for practical applications of our data. We assume it will be usable in a marketing context with ads, commercials, etc., but we also think it might be useful in the context of traffic psychology (warning signs, etc.). Anyway, we want to keep this point open, as we still need to do the actual research, though we would like to hear other ideas for practical use of the outcome of our experiment.
5.) We are planning to use sets of 6 images divided across the screen surface, without automatic centering or other effects that might be induced. So our concrete idea is to use 2 rows of 3 images. Here we need feedback about the arrangement of the images, etc.
6.) The experiment mentioned above, from the Journal of Experimental Social Psychology, used subliminal priming methods. We are planning to use these as well, as we think the risk of obvious death-priming should be reduced. Here we are having trouble finding validations of subliminal priming. Any feedback would be helpful.
7.) Our last question: we think using black-and-white pictures might help avoid salience effects regarding colours (e.g. focusing on red, etc.). Does anyone have ideas here?
Thank you very much in advance for your feedback!
Many scientists believe such information is deleted by attentive processes, but there is no proof for that.
Is there any EEG database for distraction or attention in driving? I've searched for nearly a week but found none.
This is a clinical study examining how the delivery format of visual information - static, video, presented in person - may affect adolescent memory. fMRI scans would be taken as subjects recall the type, volume, and temporal memories associated with the presentation and calculation of skill-testing questions delivered four weeks prior. The subjects would be focused on the calculations they were to perform, while the researchers would examine whether different neural substrates are elicited as the person recalling the testing event verbalizes what they can recall.
To the best of our ability, we have not been able to find research that explains what, if any, effect the medium of visual information plays in adolescent memory. If there are folks out there who have something that we should be aware of, we would be grateful.
I am conducting a study that will present participants with categories of stimuli to determine the influence of stimulus valence on attentional processes. To avoid low level confounds, it is important that the stimuli be matched on physical attributes, such as luminance, contrast, and spatial frequency. Any advice for yielding these indices would be very much appreciated.