Visual Processing - Science topic
Explore the latest questions and answers in Visual Processing, and find Visual Processing experts.
Questions related to Visual Processing
I want to check whether I can use GFP mice for IF quantification and look for proteins using secondary antibodies whose fluorophores (568 or 532 nm) would not excite GFP. Would the GFP signal interfere with my staining and visualization process?
I am looking for software that allows anyone to visualize process models and sub-processes and to share them collaboratively. It could be a mix of tools combined to provide an intuitive platform to work within a kind of process framework management system.
I am looking for a tutorial or course to learn how to open, visualize, and process hyperspectral images in .hdr/.raw format using Python. Any recommendation for a tutorial would be appreciated.
Thank you in advance
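In case it helps before a full tutorial turns up, here is a minimal sketch, assuming the data are in ENVI format (a .hdr header next to the .raw binary) and using the third-party spectral package; the file name and band indices are placeholders.

```python
import spectral                  # ENVI hyperspectral reader (pip install spectral)
import matplotlib.pyplot as plt

img = spectral.open_image('scene.hdr')    # parses the header; data stay on disk
cube = img.load()                         # load the full cube (rows, cols, bands)
print(cube.shape)

spectral.imshow(img, bands=(30, 20, 10))  # quick-look RGB from three arbitrary bands
plt.figure()
plt.plot(cube[100, 100, :].squeeze())     # spectrum of a single pixel
plt.xlabel('band index')
plt.ylabel('value (sensor units)')
plt.show()
```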
I calculated the charge transfer taking place in a molecule-adsorbed system and visualized the process using a charge density difference (CDD) plot. I need to calculate the threshold value of charge transfer for the adsorption to occur.
Can anyone explain how to establish a relationship between the adsorption potential and the charge transfer, and subsequently how to calculate the threshold value of charge transfer using this relationship?
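This does not address the threshold question itself, but for quantifying the amount of charge transferred from CDD data, one common post-processing step is to integrate the planar-averaged Δρ on the molecular side of the interface. Below is a minimal sketch, assuming Δρ is already available on a real-space grid in e/Å³; the grid layout, units, and interface position are assumptions to be adapted to your own setup.

```python
import numpy as np

def transferred_charge(delta_rho, z, area, z_interface):
    """Net charge (in e) transferred to the region above z_interface.

    delta_rho   : (nx, ny, nz) charge density difference, e/Angstrom^3
    z           : (nz,) z grid points, Angstrom
    area        : in-plane cell area, Angstrom^2
    z_interface : plane separating the surface from the molecule, Angstrom
    """
    plane_avg = delta_rho.mean(axis=(0, 1)) * area   # e/Angstrom at each z-plane
    mask = z >= z_interface
    return np.trapz(plane_avg[mask], z[mask])        # integrate over the molecular region
```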
Hi,
I will be running an experiment that requires participants to distinguish between several novel objects. Ideally, each novel object will be a configuration of 3D geometric shapes (e.g., pyramids, pentagonal prisms, spirals, discs, cuboids), but objects *cannot* be distinguished from one another based on any one particular local feature: the only defining aspect of an object should be its overall configuration.
For example, if we have object A (a configuration of a cuboid, cylinder, and pyramid), then for each of its features there will be at least one other novel object that contains the identical feature (e.g., object B might have the identical cuboid, object C might have the identical pyramid, and so on…), and thus the objects cannot be differentiated based on local features and must be differentiated by overall configuration instead. So I'm looking for a stimulus set in which features have been manipulated systematically such that objects can be distinguished only by their configuration of features (something corresponding to the linked table would be ideal).
Has such a stimulus set been used in the past, and if so, has it been made available? Any suggestions welcome.
Ryan
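In case it helps to make the systematic manipulation concrete, here is a minimal sketch of one possible construction (not a published stimulus set): each part slot draws from two exemplars, and objects take the even-parity combinations, so every individual exemplar recurs in exactly two objects while any two objects differ in at least two slots. Slot and exemplar names are placeholders.

```python
from itertools import product

slots = ['top', 'middle', 'base']
exemplars = {
    'top':    ['pyramid_A', 'pyramid_B'],
    'middle': ['cuboid_A',  'cuboid_B'],
    'base':   ['disc_A',    'disc_B'],
}

# Keep only combinations with an even number of '_B' exemplars: this yields four
# objects, each exemplar is used by exactly two of them, and any two objects
# differ in at least two slots, so no single local feature is diagnostic.
combos = [c for c in product([0, 1], repeat=len(slots)) if sum(c) % 2 == 0]

for i, combo in enumerate(combos):
    config = {slot: exemplars[slot][bit] for slot, bit in zip(slots, combo)}
    print(f'object {i}:', config)
```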
In the first place, this question is not about encoding or feature representation. We are aware of the important role given to object features in various visual scenarios; it covers a wide spectrum in vision that includes, at least, salience maps, object relevance, eye movements, attentional processes, fixation, etc. It is hard not to encounter some form of feature-based visual model, with the exception of our recent research proposal, Entropy Driven Deep Learning, which attempts to introduce an alternative view. Let us get back to the challenging dilemma of characterizing object features so that we can evaluate their contribution to the system's objective(s). We may characterize features by that contribution, but then another immediate problem arises: how to encode feature contributions and characterize features as separate entities. Feature evaluation seems to be a problem especially in proactive vision, as well as in higher cognitive visual processes such as perceptual organization. We are addressing the problem in our ongoing research proposal. Your response to the posed problem will help many research efforts, including ours.
Does colouring in and using templates kill creativity?
In my opinion, creativity needs to be let loose and experimented with widely, not constricted within lines.
"His relative visual spatial strength, as compared to working memory, indicate that although he shows skill when processing visual information, he may experience difficulty making distinctions between the visual information that he previously viewed and the visual information that he is currently viewing."
Please add an example. Thank you
I am having problems with an Arduino AD8232 sensor. I use the classic Arduino IDE 1.8.2 for the Arduino and Processing for visualizing the data. According to the graph, the Arduino sends data at at least 100-300 Hz (the graph between two R peaks, ca. 1 s apart, is quite precise), but when I export the raw data I get one value roughly every 17 milliseconds (around 60 Hz). I do not see why I cannot get more finely sampled raw data, since such data is clearly being used to draw the graph. Are there any hardware or software obstacles I am not aware of?
Thanks!
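One thing worth ruling out on the software side: Processing's draw() loop runs at about 60 fps by default, so if the raw values are written out from inside draw(), the export is capped near 60 Hz even when the serial stream itself is faster. Below is a minimal sketch that logs every incoming sample with its own timestamp, independently of Processing, assuming the Arduino prints one reading per line over serial; the port name and baud rate are placeholders.

```python
import time
import serial  # pip install pyserial

ser = serial.Serial('COM3', 115200, timeout=1)   # adjust port and baud rate
samples = []
t0 = time.perf_counter()
while time.perf_counter() - t0 < 10.0:           # log for 10 seconds
    line = ser.readline().strip()
    if line:
        samples.append((time.perf_counter() - t0, line.decode(errors='ignore')))
ser.close()

print(f'{len(samples) / 10.0:.1f} samples/s actually arriving over serial')
with open('ad8232_raw.csv', 'w') as f:
    f.writelines(f'{t:.6f},{v}\n' for t, v in samples)
```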
Teachers and educators often reference old works by philosophers, psychologists, and social scientists to support their theories and practices in education. But almost never do I see references to the modern science of the brain regarding how we humans learn.
Some problems with the older works mentioned are: they may be based on an individual projecting his experiences onto the whole population; or they may be based on a small sample of experimental subjects; or they may be based on skewed sampling with respect to economic class, cultural biases, linguistic biases, etc.
On the other hand, if one has available a valid model of how the brain learns, this is universal. It is an intrinsic explanation of how we humans learn, and it applies to everyone, without biases and without statistical uncertainties.
With such knowledge, we educators and teachers can, with confidence, devise teaching methods that can be easily adapted to the needs of our students.
More educators and teachers need to invest the time and small effort to become familiar with how our brains deal with the two extremes of processing novelties and processing routines, and then with how learning occurs as we transition from novelty to routine processing.
I am interested in literature on the perception of images presented in series, on how different images relate to one another, or on how to assemble a small number of images together.
Hi everyone!
My project focuses on stimulation of the primary visual cortex of macaque monkeys. The goal is to evoke phosphenes, the location of which the monkey has to report by making a saccade in the direction of the percept. When I show blob-like visual stimuli of varying shape, contrast, and color on the screen, the monkey makes saccades to them precisely. However, when I apply electrical stimulation, the monkey responds by looking somewhere else, in varying directions unrelated to the receptive field of the stimulated cortical site. He definitely "feels" the stimulation, because he does not react with saccades on trials where neither visual nor electrical stimulation is present.
How would you interpret such results, and what would you infer about the presence and location of evoked phosphenes?
Looking forward to your ideas!
Hi,
I am interested in how subjects' performance is affected by the spatial frequency of the stimulus in a 2-alternative forced choice orientation discrimination task.
So far, I have found an old paper that is quite relevant to this question: Burr, D. C. & Wijesundra, S.-A. Orientation discrimination depends on spatial frequency. Vision Res. 31, 1449–1452 (1991) (I have attached one of the main figures). Interestingly, increasing the spatial frequency of the stimulus beyond an "optimal" one rapidly decreases subjects' performance, whereas they do a pretty good job of discriminating low spatial frequencies.
I would be very grateful if you could suggest any other relevant papers concerning this issue.
Thanks
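In case it is useful for piloting, here is a minimal numpy sketch of the kind of stimulus involved, a Gabor patch with adjustable spatial frequency and orientation; the viewing-geometry parameters are placeholders to be matched to your display.

```python
import numpy as np

def gabor_patch(size=256, sf_cpd=2.0, ori_deg=45.0, sigma_deg=1.0, extent_deg=4.0):
    """Gabor patch: cosine carrier at sf_cpd cycles/degree and orientation ori_deg,
    under a Gaussian envelope of width sigma_deg. Values lie in [-1, 1];
    scale to the desired contrast before display."""
    half = extent_deg / 2.0
    coords = np.linspace(-half, half, size)              # degrees of visual angle
    xx, yy = np.meshgrid(coords, coords)
    theta = np.deg2rad(ori_deg)
    xr = xx * np.cos(theta) + yy * np.sin(theta)          # axis orthogonal to the bars
    carrier = np.cos(2.0 * np.pi * sf_cpd * xr)
    envelope = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma_deg ** 2))
    return carrier * envelope

stim = gabor_patch(sf_cpd=8.0, ori_deg=92.0)              # e.g. high SF, near-vertical
```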
The general data we found range between 10 and 20 minutes in young, healthy individuals, but more accurate data specific to these tasks (visual attention) would be great.
Hi there. I happened to read that "the infrared-reflection eye-tracking technique cannot be used with newborn babies". Though I agree with this statement, I could not find a proper reference or an explanation.
Intuitively, I can think of at least three facts that prevent good calibration with newborns: 1) newborns are incapable of tracking moving objects; 2) newborns have low sensitivity to light sources; 3) newborns can only see about 25 cm away.
However, I did not find any confirmation of these speculations in the relevant methodological papers. Thanks to anyone who can answer and point to any references!
Distance from the fixation point can affect the attentional response in the human attentional control system. Is there any mathematical function that describes this?
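There is more than one candidate, but one common modeling assumption is a Gaussian gradient, with gain falling off smoothly with distance from the fixated or attended location. Below is a minimal sketch of that assumption; the parameter values are placeholders to be fit to behavioral or neural data.

```python
import numpy as np

def attentional_gain(d_deg, peak=1.0, sigma_deg=3.0, baseline=0.1):
    """Gaussian attentional gradient (one common modeling assumption, not the
    only one): response gain as a function of distance d_deg, in degrees of
    visual angle, from the fixated/attended location. peak, sigma_deg and
    baseline are free parameters to be estimated from data."""
    return baseline + (peak - baseline) * np.exp(-d_deg ** 2 / (2.0 * sigma_deg ** 2))
```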
I am looking for articles about visual attention tasks, such as Rapid Visual Information Processing (RVP), Choice Reaction Time (CRT), and Reaction Time (RTI), using EEG or MEG.
Why are we unable to visualize higher-dimensional space? Is there any special feature or structure in the visuo-spatial areas of our brain that limits our perception of the world to only three dimensions? This is a question for neurobiologists.
The many hypotheses of Cott and Thayer all seem to rest on the assumption that predators recognize prey by their shape or outline. Have there been any behavioral studies to support that claim, or is it just assumed to be true?
Hello all,
Is there any difference or similarity between the human eye and a camera in terms of noise level? Both use a similar technique, yet a camera captures noise while the eye does not. Is there any way to create artificial noise so that only the camera captures it, but not the eye?
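On the second part of the question, one possibility would be to confine the noise to spatial frequencies above the viewer's acuity limit at the intended viewing distance, which a sufficiently high-resolution camera would still record. Below is a minimal sketch under that assumption, for a grayscale image with values in [0, 1]; the cutoff frequency is a placeholder, and whether the pattern is really invisible to the eye would need to be verified psychophysically.

```python
import numpy as np

def add_high_frequency_noise(image, cutoff_cycles_per_image, amplitude=0.02, seed=0):
    """Add zero-mean noise whose power lies only above a spatial-frequency
    cutoff (in cycles per image). image: 2D float array in [0, 1]."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    noise = rng.standard_normal((h, w))
    fy = np.fft.fftfreq(h)[:, None] * h        # vertical frequency, cycles/image
    fx = np.fft.fftfreq(w)[None, :] * w        # horizontal frequency, cycles/image
    radius = np.sqrt(fx ** 2 + fy ** 2)
    spectrum = np.fft.fft2(noise)
    spectrum[radius < cutoff_cycles_per_image] = 0.0   # zero out the low frequencies
    hf = np.real(np.fft.ifft2(spectrum))
    hf *= amplitude / (hf.std() + 1e-12)               # set the noise contrast
    return np.clip(image + hf, 0.0, 1.0)
```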
In Hubel's book Eye, Brain, and Vision and many other publications, it is stated that V1 layer 4 mainly contains (among other cell types) simple cells that are sensitive to oriented edges appearing at a certain location in the visual field.
Why, then, does stimulation of this layer give rise to bright spots and not to oriented edges/lines?
I am a psychotherapist looking to learn more about the visual field and the brain, as I am a trainer and clinician in Brainspotting Therapy. We use a single fixed gaze point resonant with traumatic distress and get amazing positive results with clients. I am looking for ways to learn more about the brain and vision in order to design a study for my PhD.
When we use a video of a flashing light in the headturn preference procedure (HTPP) during familiarization and test, infants fail to discriminate our target words. However, when we use a more engaging visual stimulus, a video of a spinning colorful pinwheel, they successfully discriminate the target words. It seems that using a more engaging stimulus reinforces infants' discrimination at test. Why does it not detract from processing of the sounds?
The problem of final integration has to do with how the brain binds visual information. What kinds of problems does final visual integration present for our understanding of how the brain works?
Reading the introduction to:
Incidental memory for parts of scenes from eye movements
Jenn H. Olejarczyk, Steven G. Luke, and John M. Henderson,
I stumbled, as I always do, at a very standard phrasing, which referred to the eyes 'taking in' 'visual information'. At some visceral level, I simply cannot accept this formulation. Do the eyes 'take in'? What is 'visual information'?
To be clear: my question is about the axiomatic assumptions and paradigms that define the way we think. For the last sixty years or so, the cognitivist-computationalist paradigm has been the dominant explanation of human cognition, at least in the academy. This paradigm, as we know, is based on analogies to reasoning machines. And while there is abundant evidence that the brain is not a computer and scant evidence that it is, we still use electro-industrial metaphors of input and output, and of thinking as internal reasoning on mental representations.
I am not persuaded by this. I feel that enactivist and Gibsonian approaches get closer to a fair description of what is really going on. These descriptions are almost incomprehensible to cognitivists, since the fundamental ideas of these paradigms are incommensurable.
Could it be that we find the brain mysterious in part because we apply inappropriate structuring metaphors which confound our inquiry?
If one exists, what would be a good behavioral parameter, even an indirect one, for doing this?
I'm interested in starting a conversation between vision science and the science of thought. The occipital cortex is extremely complex, and I understand the boundaries between different levels of analysis, but I believe they should still communicate. So, my question refers to models of predictive coding in early stages of visual processing and their relation to later stages, such as belief in successful outcomes. My confusion arises from contrasting different literatures on the influence of expectancy biases.
I'm interested in designing color-choice experiments that incorporate different target and background colors. Thanks!
Can anybody point me to some known and well-acknowledged models of human image recognition? Something that answers the question: how are images recognized by humans?
Neuroscience is not my field, but I'd like to find something similar to the Interactive Activation Model (which applies to words).
EDIT: I'll add some details. I'm also interested in how people look at images to extract relevant content. If I show a picture to a man/woman, will he/she look at all elements in the picture in the same way? Will he/she skip certain parts or focus on others? And is it the same whether I show a picture of a landscape or a picture of a fish-sticks package?
I believe some metrics related to the eye, such as pupil dilation, may give an indication of the extent to which something being looked at is being actively processed. However, I am interested in ways to determine whether someone is paying attention to (cognitively processing) what they are looking at in natural, real-world conditions, where changing light levels may make it difficult to use pupil dilation as a measure. I am therefore wondering if there are any tell-tale signs from eye movements that can reveal whether something is being actively processed and has some cognitive importance to the observer.
For example, research on inattentional blindness shows that just because something in our environment is fixated does not mean it is perceived or processed. Also, research has been carried out about mind-wandering during reading which suggests eye movements may be qualitatively different during periods of mind-wandering compared with when what is being read is being processed. Are there any similar findings for natural situations such as just walking through an environment?
We know, for example, that some action games and mathematics can develop these skills. What other activities can?
What is the relationship between visual perception and stress and anxiety? What is the mechanism by which visual perception causes these effects?
How do we perceive certain instructions and perform them? What series of actions runs through our eyes, brain, and other body parts? How do things get decoded once seen on a piece of paper?
Specifically, if we use an eye-tracking device to record the spontaneous viewing of an observer, what kind of evidence might indicate successful perceptual grouping (i.e., by the Gestalt laws of similarity, proximity, continuation, etc.) versus no grouping? I would appreciate it if anyone could point out a few good reference papers.
The child's explanation for its (very clever!) question: wouldn't it be faster if visual stimuli were processed in the frontal lobe (a shorter path)?
I got this question during a talk about the brain at a Junior Science Café meeting. Does anyone know a good, child friendly answer?
Based on the “Object-Spatial-Verbal Cognitive Style Model” of Kozhevnikov, Kosslyn and Shephard (2005), are there any available instruments to measure the visuospatial dimensions separately, i.e., object imagery (the ability to mentally represent object details such as form, color, etc.) and spatial imagery (the ability to mentally represent and manipulate spatial relations between objects and their parts)?
Any tips will be welcome, thank you!