
Visual Processing - Science topic

Questions related to Visual Processing
  • asked a question related to Visual Processing
Question
2 answers
I want to check if I can use GFP mice for IF quantification and look for proteins using secondaries that wouldn't excite GFP? (568 or 532)? Would GFP signal interfere with my staining and visualization process?
Relevant answer
Answer
Yes, it is doable using secondaries whose excitation spectra overlap less with GFP, as suggested. Be aware that fixatives can quench GFP, although if it is eGFP this shouldn't be a problem. Also remember to keep the entire staining process away from light, not just the steps after secondary incubation, so you don't lose the GFP signal. Good luck!
  • asked a question related to Visual Processing
Question
5 answers
I am looking for software that lets anyone visualize process models and sub-processes and share them collaboratively. It could be a mix of tools combined to provide an intuitive platform to work within a kind of process-framework management system.
Relevant answer
Answer
Much chemical process modelling and simulation work can be done with the widely used Microsoft Excel.
Application example: simulations carried out in Excel 5.0 with Visual Basic for Applications (VBA) macros. The recursive least squares (RLS) algorithm allows least squares regression to be applied dynamically (in real time) to time series. Years ago, while investigating adaptive control and energetic optimization of aerobic fermenters, I applied the RLS algorithm with a forgetting factor (RLS-FF) to estimate the parameters of the KLa correlation used to predict O2 gas-liquid mass transfer, giving increased weight to the most recent data. Estimates were improved by imposing sinusoidal disturbances on air flow and agitation speed (the manipulated variables). The power dissipated by agitation was measured with a torque meter. The proposed adaptive control algorithm compared favourably with PID. This investigation was reported in my MSc thesis:
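For anyone curious how RLS with a forgetting factor works in practice, here is a minimal self-contained sketch in Python rather than the original VBA; the synthetic data, forgetting factor, and initialization below are illustrative choices of mine, not values from the thesis:

```python
import numpy as np

def rls_ff(X, y, lam=0.98, delta=1e3):
    """Recursive least squares with forgetting factor lam.

    X: (T, p) regressor matrix, y: (T,) targets.
    Returns the parameter-estimate trajectory, shape (T, p).
    """
    T, p = X.shape
    theta = np.zeros(p)
    P = delta * np.eye(p)            # large initial covariance
    history = np.empty((T, p))
    for t in range(T):
        x = X[t]
        e = y[t] - x @ theta         # a priori prediction error
        Px = P @ x
        k = Px / (lam + x @ Px)      # gain vector
        theta = theta + k * e
        P = (P - np.outer(k, Px)) / lam   # discount old data by 1/lam
        history[t] = theta
    return history

# Example: recover slope/intercept of y = 2*t + 3 from noisy samples.
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 500)
X = np.column_stack([t, np.ones_like(t)])
y = 2.0 * t + 3.0 + 0.01 * rng.standard_normal(500)
est = rls_ff(X, y)[-1]
```

With lam below 1 the effective data window is roughly 1/(1-lam) samples, which is what gives recent data more weight.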
  • asked a question related to Visual Processing
Question
3 answers
I am looking for a tutorial or course on how to open, visualize, and process hyperspectral images in .hdr/.raw format using Python. Any recommendation for a tutorial is appreciated.
Thank you in advance
Relevant answer
Answer
The module 'spectral' can do that. Have a look at http://www.spectralpython.net/
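If you also want to see what is going on under the hood: an ENVI .raw file is just a flat binary cube whose dimensions, data type, and interleave are listed in the .hdr file (which `spectral.open_image('file.hdr')` parses for you). A minimal NumPy sketch, with the header values passed in by hand for illustration:

```python
import numpy as np

def read_envi_raw(path, lines, samples, bands, interleave="bil", dtype=np.uint16):
    """Read a raw ENVI cube into a (lines, samples, bands) array.

    The dimensions, interleave, and dtype normally come from the
    accompanying .hdr file; here they are supplied explicitly.
    """
    flat = np.fromfile(path, dtype=dtype)
    if interleave == "bsq":    # band-sequential: stored as (band, line, sample)
        return flat.reshape(bands, lines, samples).transpose(1, 2, 0)
    if interleave == "bil":    # band-interleaved-by-line: (line, band, sample)
        return flat.reshape(lines, bands, samples).transpose(0, 2, 1)
    if interleave == "bip":    # band-interleaved-by-pixel: (line, sample, band)
        return flat.reshape(lines, samples, bands)
    raise ValueError(f"unknown interleave: {interleave}")
```

Once the cube is loaded, any band is just `cube[:, :, i]` and can be shown with matplotlib.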
  • asked a question related to Visual Processing
Question
2 answers
I calculated the charge transfer taking place within a molecule-adsorbed system and visualized the process using a charge density difference (CDD) plot. I need to calculate the threshold value of charge transfer for adsorption to occur.
Can anyone explain how to establish a relationship between adsorption potential and charge transfer, and subsequently how to calculate the threshold value of charge transfer from this relationship?
Relevant answer
Answer
Hi Mostafa Y Nassar
I am working on the adsorption of various gas molecules on a Cu-doped MoS2 system. I have been asked to calculate the threshold value of charge transfer for adsorption to occur.
I am using VASP 5.2 for the calculations. I calculated charge transfer using Bader charge analysis and used the software VESTA for charge density difference plot.
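For what it's worth, once the Bader populations are in hand (e.g. from ACF.dat), the net charge transferred to the adsorbate is just the sum over adsorbate atoms of (Bader electron count minus POTCAR ZVAL). A small illustrative sketch with made-up numbers, not results from any actual system (finding the threshold itself still requires relating this quantity to adsorption energies across systems):

```python
# Valence electron counts per species (the POTCAR ZVAL values).
ZVAL = {"C": 4.0, "O": 6.0}

# Hypothetical Bader electron populations for a CO2 adsorbate,
# as would be parsed from ACF.dat (atom symbol, Bader charge).
bader = [("C", 1.52), ("O", 7.31), ("O", 7.30)]

def net_transfer(atoms, zval):
    """Electrons gained by the adsorbate (negative = donated to the surface)."""
    return sum(q - zval[sp] for sp, q in atoms)

dq = net_transfer(bader, ZVAL)   # (1.52-4) + (7.31-6) + (7.30-6) = 0.13 e
```

A positive dq means the adsorbate gained electron density from the substrate; the sign convention is the opposite of many papers, so it is worth stating explicitly when reporting.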
  • asked a question related to Visual Processing
Question
9 answers
Hi,
I will be running an experiment that requires participants to distinguish between several novel objects. Ideally, each novel object will be a configuration of 3D geometric shapes (e.g., pyramids, pentagonal-prisms, spirals, discs, cuboids) but objects *cannot* be distinguished from one another based on one particular local feature: the only defining aspect of an object should be its overall configuration.
For example, if we have object A (a configuration of a cuboid, cylinder, and a pyramid), for each of its features there will be at least one other novel object that contains the identical feature (e.g., object B might have the identical cuboid, object C might have the identical pyramid, and so on…), and thus the objects cannot be differentiated based on local features, and must be differentiated by overall configuration instead. So I'm looking for a stimulus set where features have been manipulated systematically such that objects can be distinguished only by their configuration of features (something corresponding to the linked table would be ideal):
Has such a stimulus set been used in the past, and if so, has it been made available? Any suggestions welcome.
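To make the constraint concrete, here is a toy Python sketch of one way to generate such a design: take every 3-part subset of a small part inventory, so that no single part (nor even a pair of parts) identifies an object; only the full configuration does. Note this controls feature identity only; the spatial arrangement of the parts would still need to be manipulated separately.

```python
from itertools import combinations

parts = ["cuboid", "cylinder", "pyramid", "disc", "spiral"]

# Every 3-part subset of 5 parts becomes one object. Each part then
# occurs in C(4,2) = 6 objects, and each *pair* of parts is shared by
# 3 objects, so only the full 3-part configuration is diagnostic.
objects = list(combinations(parts, 3))

# Verify that no single part or pair of parts picks out a unique object.
for r in (1, 2):
    for sub in combinations(parts, r):
        owners = [o for o in objects if set(sub) <= set(o)]
        assert len(owners) > 1, f"{sub} would be diagnostic"
```

The same idea scales: with n parts and k-part objects, each individual part is shared across many objects, which is exactly the property described in the question.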
Ryan
Relevant answer
Answer
Hi Ryan
Take a look at Gauthier, I., & Tarr, M. J. (1997). Becoming a "Greeble" expert: Exploring mechanisms for face recognition. Vision Research, 37, 1673-1682.
It may help.
Regards
Michael
  • asked a question related to Visual Processing
Question
2 answers
In the first place, this question is not about encoding or feature representation. We are aware of the important role given to object features in various visual scenarios. It covers a wide spectrum in vision that includes, at least, salience maps, object relevance, eye movements, attention processes, fixation, etc. It is genuinely hard to avoid encountering some form of feature-based visual model, although our recent research proposal, Entropy Driven Deep Learning, attempts to introduce an alternative view. Let us get back to the challenging dilemma of characterizing object features so that we can evaluate their contribution to the system's objective(s). We might characterize features by that contribution, but then another problem immediately arises: how do we encode feature contributions and characterize features as separate entities? Feature evaluation is a problem especially in proactive vision, as well as in higher cognitive visual processes such as perceptual organization. We are addressing the problem in our ongoing research proposal. Your response to the posed problem will help many research efforts, including ours.
Relevant answer
Answer
Dear professor, thank you so much for providing your response. Although my question on characterization relates to the artificial-intelligence view of exogenous vision, your response enlightened me, and I am sure it will help other interested researchers in this area of work. Thank you again, Homayoun
  • asked a question related to Visual Processing
Question
15 answers
Does colouring in and using templates kill creativity?
In my opinion, creativity needs to be let loose and experimented with widely, not constricted within lines.
Relevant answer
Answer
Hi, good question. I think that creativity is, in a way, rule based. In play there is the idea of paidia, which is open play, but research has found that in open play, participants impose rules on themselves to make the experience more interesting. So I think that fully open creativity is impossible, as people impose rules as they work through their creative endeavours, if that makes any sense.
  • asked a question related to Visual Processing
Question
2 answers
"His relative visual spatial strength, as compared to working memory, indicate that although he shows skill when processing visual information, he may experience difficulty making distinctions between the visual information that he previously viewed and the visual information that he is currently viewing."
Please add an example.  Thank you
Relevant answer
Answer
It appears to be suggesting that if the person is shown visuospatial data and then, a short time later, is shown additional such information, he may struggle to distinguish between them or to use the material in a useful manner to think through a problem. As Ryan said, the computer-generated interpretation seems nebulous.
  • asked a question related to Visual Processing
Question
4 answers
I am having problems with the Arduino AD8232 sensor. I use the classic Arduino IDE 1.8.2 for the Arduino and Processing for visual display of the data. Judging from the graph, the Arduino sends data at a rate of at least 100-300 Hz (the trace between two R peaks, about 1 s apart, is quite precise), but when I export the raw data I get one value roughly every 17 milliseconds (around 60 Hz). I don't see why I can't get more finely sampled raw data, since finer data is clearly being used to draw the graph. Are there any hardware or software obstacles I am not aware of?
Thanks!
Relevant answer
Answer
Thanks everybody,
your suggestions are really helpful. For now I have successfully increased the data-acquisition precision by bypassing the analysis and using serial-port terminal apps such as PuTTY and CoolTerm to capture the data and analyze it later.
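In case it helps anyone doing the same, the terminal-capture step can also be scripted. A minimal sketch, assuming the Arduino sketch prints `millis(),analogRead(...)` lines; the port name, baud rate, and line format here are illustrative, and live capture needs the third-party pyserial package (one common bottleneck to check is the serial baud rate itself):

```python
import time

def parse_samples(lines):
    """Parse 'millis,value' lines into (t_ms, value) pairs,
    skipping partial or garbled lines."""
    out = []
    for ln in lines:
        try:
            t_ms, v = ln.strip().split(",")
            out.append((int(t_ms), int(v)))
        except ValueError:
            continue
    return out

def sample_rate_hz(samples):
    """Effective sampling rate estimated from the timestamps."""
    (t0, _), (t1, _) = samples[0], samples[-1]
    return 1000.0 * (len(samples) - 1) / (t1 - t0)

def capture(port="/dev/ttyUSB0", baud=115200, seconds=10):
    """Capture raw lines from the board (requires: pip install pyserial)."""
    import serial                      # third-party, imported only when used
    lines = []
    with serial.Serial(port, baud, timeout=1) as ser:
        end = time.time() + seconds
        while time.time() < end:
            lines.append(ser.readline().decode(errors="ignore"))
    return lines
```

Printing the `millis()` timestamp with every sample lets you verify the true acquisition rate directly instead of inferring it from the plot.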
:)
Best,
Jan
  • asked a question related to Visual Processing
Question
28 answers
Teachers and educators often reference old works by philosophers, psychologists, and social scientists to support their theories and practices in education. But, almost never do I see references to the modern science of the brain regarding how we humans learn.
Some problems with the older works mentioned are: they may be based on an individual projecting his experiences onto the whole population; or on a small sample of experimental subjects; or on a skewed sampling with respect to economic class, cultural biases, linguistic biases, etc.
On the other hand, a valid model of how the brain learns would be universal: an intrinsic explanation of how we humans learn, applying to everyone, without biases and without statistical uncertainties.
With such knowledge, we educators and teachers can, with confidence, devise teaching methods that can be easily adapted to the needs of our students.
More educators and teachers need to invest the time and small effort to become familiar with how our brains deal with the two extremes of processing novelties and processing routines; and then how the learning occurs as we transition from novelty to routine processing.
Relevant answer
Answer
Dear Antonio Lucero
You rightly say that teachers and educators often refer to old works by philosophers, psychologists, and social scientists to support their theories and practices in education. But almost never, you say, have you seen references to the modern science of the brain regarding how we humans learn. In your wise considerations, you also refer to "a valid model of how the brain learns." Let me say that when you advise us to look at "how the brain learns", you are assuming, if I am not wrong, that it is one's brain that learns. Of course, we need our brain to learn, to see, and even to walk. But if we performed a conceptual or grammatical investigation à la Wittgenstein, would it not be abusive and even misleading to say that it is our brain that learns, sees, and walks?
To my understanding, it is learners or people as a whole and immersed in a given physical and social milieu or in a certain language-game, as Wittgenstein used to say, not their brains, who learn. Of course, when we learn, several things happen in our brains, for example, in terms of electric potentials, neural firings, connections among neurons, and the like. Without any doubt, it is important to know what happens in our brains when, for example, we passively learn by rote learning or by actively reconstructing or reinventing what we learn, as Piaget used to say. More than anyone, neuroscientists are in good position to do this important job and they are actually doing it. 
Note also that important and widely acclaimed educators, such as M. Montessori, C. Freinet, and P. Freire, were excellent teachers, and yet they knew little, if anything, about the brain's functioning. I guess that many present-day excellent teachers know almost nothing of the brain's functioning either. Of course, I also admit that they could be even better teachers if they took into account neural findings, namely those related to teaching/learning.
I know that the neurosciences are now much in vogue. Thanks to them, our knowledge of the brain is now much more advanced than it was in the past. It is wonderful that we can profit from such knowledge to improve the way teachers teach and students learn. Even so, I think that it makes little, if any, sense to say, as many neuroscientists do (e.g., Damasio, Gazzaniga), that our brain is social, moral, ethical, pedagogical, and the like. It makes good sense to say of a person that s/he thinks or acts in a social/antisocial, moral/immoral, or ethical/unethical way. It is misleading and even nonsensical, however, to say, as neuroscientists tend to do, that our brain is social/antisocial, moral/immoral, ethical/unethical, and the like. This means that I do not follow neuroscientists when they attribute to the brain predicates that are applicable to a person as a whole (e.g., moral or immoral, knowledgeable or relatively ignorant). As teachers, scientists, and so forth, we should always look critically at any breakthrough whatsoever.
Best wishes, Orlando
  • asked a question related to Visual Processing
Question
1 answer
I am interested in literature on image perception when presented in series or on how to relate different images or how to assemble a small number of images together.
Relevant answer
Answer
Walter Benjamin (1935), The Work of Art in the Age of Mechanical Reproduction; and Arthur Danto (1974), The Transfiguration of the Commonplace.
  • asked a question related to Visual Processing
Question
8 answers
Hi everyone!
My project focuses on stimulation of the primary visual cortex of macaque monkeys. The goal is to evoke phosphenes, whose location the monkey has to report by making a saccade in the direction of the percept. When I show blob-like visual stimuli of varying shape, contrast, and color on the screen, the monkey makes saccades precisely to them. However, when I apply electrical stimulation, the monkey responds by looking somewhere else, in various directions unrelated to the receptive field of the stimulated cortical site. He definitely "feels" the stimulation, because he doesn't react with saccades on trials where neither visual nor electrical stimulation is present.
How would you interpret such results, and infer on the presence and location of evoked phosphenes?
Looking forward to your ideas!
Relevant answer
Answer
Hi Serge,
1. What kind of stimulation protocol are you using? It is possible that you are activating a very large neural population and the monkey "sees" a big phosphene that is not clearly spatially localized.
2. Are you sure the electrode is not moving? Are you comparing the RF measured at the beginning and at the end of the stimulation?
3. Are you sure the monkey understood the task? I guess a phosphene can be a very strange stimulus for a monkey. It might just try to make a saccade away from it.
good luck
  • asked a question related to Visual Processing
Question
3 answers
Hi,
I am interested in how subjects' performance is affected by the spatial frequency of the stimulus in a 2-alternative forced-choice orientation discrimination task.
So far, I have found this old paper that is quite relevant to the question: Burr, D. C. & Wijesundra, S.-A. Orientation discrimination depends on spatial frequency. Vision Res. 31, 1449–1452 (1991) (I have attached one of the main figures). Interestingly, increasing the spatial frequency of the stimulus beyond an "optimal" one rapidly decreases subjects' performance, while they do a pretty good job of discriminating low spatial frequencies.
I would be very grateful if you could suggest any other relevant papers on this issue.
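In case it is useful for replicating this kind of task, here is a minimal NumPy sketch of an oriented Gabor patch generator, the standard stimulus in these 2AFC orientation-discrimination experiments; the size, field of view, and envelope width below are arbitrary example values:

```python
import numpy as np

def gabor(size=128, sf_cpd=2.0, ori_deg=45.0, deg_per_image=4.0, sigma_deg=0.8):
    """Oriented Gabor patch: a sinusoidal grating of a given spatial
    frequency (cycles/deg) under a Gaussian envelope; values in [-1, 1]."""
    half = deg_per_image / 2.0
    x, y = np.meshgrid(np.linspace(-half, half, size),
                       np.linspace(-half, half, size))
    th = np.deg2rad(ori_deg)
    xr = x * np.cos(th) + y * np.sin(th)          # rotate the coordinate frame
    grating = np.cos(2 * np.pi * sf_cpd * xr)     # carrier at sf_cpd cycles/deg
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_deg**2))
    return grating * envelope
```

Sweeping `sf_cpd` while jittering `ori_deg` around vertical or horizontal gives exactly the manipulation studied in the Burr & Wijesundra paper.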
Thanks
Relevant answer
Answer
With my pleasure.
  • asked a question related to Visual Processing
Question
7 answers
The general figures we found range between 10 and 20 min in young, healthy individuals, but more precise data specific to these tasks (visual attention) would be great.
Relevant answer
Answer
A related control issue is sleep quantity, quality and circadian timing preceding the experiment.  We instructed subjects to sleep eight hours per night for the three nights preceding an experimental session, retiring and arising at the same times of night/day.  Also, we always tested at the same time(s) of day/night.
  • asked a question related to Visual Processing
Question
5 answers
Hi there. I happened to read that the "infrared-reflection eye-tracking technique cannot be used with newborn babies". Though I agree with this statement, I could not find a proper reference or an explanation.
Intuitively, I can think of at least three facts that prevent good calibration with newborns: 1) newborns are incapable of tracking moving objects; 2) they have low sensitivity to light sources; 3) they can only see about 25 cm away.
However, I did not find any confirmation of my speculations in the relevant methodological papers. Thanks to anyone who answers or can point to references!
Relevant answer
Answer
John Wattam-Bell told me once that newborns (< 1 month old) can sometimes exhibit greater corneal reflectance, which can degrade the eye-tracking signal.
To be honest, though, I wasn't aware that eye-tracking *cannot* be used with newborns, and I am somewhat sceptical of that claim. That said, since neonates are seldom awake and have extremely low acuity, I can well imagine that it would be extremely difficult to conduct an eye-tracking experiment in such a population (e.g., as you say, motivating them to complete the calibration procedure alone would be very taxing), and more passive techniques, such as ERPs, are generally preferable.
EDIT: NB: I don't think your 3 reasons are the primary ones, since they would not be unique to automated/infrared eye-tracking (and manual/human gaze-tracking in neonates is certainly feasible!)
  • asked a question related to Visual Processing
Question
4 answers
Distance from the fixation point can affect the attentional response of the human attentional control system. Is there any mathematical function that describes this?
Relevant answer
Answer
For overt shifts of attention (i.e. saccades) and distractor effects there is a spatial structure. These have been described geometrically (i.e. mapped). See Walker et al. (1997), J Neurophysiol 78:1108-1119, and papers citing it. Not sure about covert shifts of attention.
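If a concrete functional form helps: the spatial structure of such attention/distractor effects is often approximated by a difference-of-Gaussians ("Mexican hat") profile over distance from the attended location, with facilitation near fixation and suppression at intermediate eccentricities. A toy sketch with illustrative (not fitted) parameters:

```python
import math

def attentional_gradient(d_deg, a_c=1.0, s_c=1.0, a_s=0.4, s_s=3.0):
    """Difference-of-Gaussians gradient of attentional response as a
    function of distance d (deg) from the attended location.
    a_c/s_c: center amplitude/width; a_s/s_s: surround amplitude/width.
    All parameter values here are illustrative, not fitted."""
    center = a_c * math.exp(-d_deg**2 / (2 * s_c**2))
    surround = a_s * math.exp(-d_deg**2 / (2 * s_s**2))
    return center - surround
```

With these example values the response is positive at fixation and turns negative a few degrees out, which is the qualitative shape such mappings report.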
  • asked a question related to Visual Processing
Question
3 answers
I am looking for articles about visual attention tasks, such as the Rapid Visual Information Processing (RVP), Choice Reaction Time (CRT), and Reaction Time (RTI) tasks, using EEG or MEG.
Relevant answer
Answer
I'm not sure how directly relevant it is, but this is a useful read for anyone thinking of using EEG: Michel CM, Murray MM (2012) Towards the utilization of EEG as a brain imaging tool. NeuroImage. doi:10.1016/j.neuroimage.2011.12.039. https://www.ncbi.nlm.nih.gov/pubmed/22227136
  • asked a question related to Visual Processing
Question
4 answers
Why are we unable to visualize higher dimensional space? Is there any special feature or structure in the visuo-spatial area of our brain that limits our perception of the world in only 3-dimensions?  This is a question for neurobiologists.
Relevant answer
Answer
In an embodied cognition perspective, it can also be explained by the fact that all our sensorimotor interactions take place in a 3D space. As a child, we learn about our environment through the coupling between our actions and their sensory consequences. It can be assumed that our higher level cognition (i.e., thinking and imagining) is grounded in these low level sensorimotor contingencies.
In the same line, the following paper proposes that "The emergence of spatial notions does not necessitate the existence of real physical space, but only requires the presence of sensorimotor invariants called 'compensable' sensory changes.". These compensable sensory changes correspond to the changes in our perception that we can compensate by some bodily actions. According to the authors, this could be the key to understanding the notion of space.
  • asked a question related to Visual Processing
Question
1 answer
The many hypotheses of Cott and Thayer all seem to rest on the assumption that predators recognize prey by shape or outline. Have there been any behavioral studies to support that claim, or is it just assumed to be true?
Relevant answer
Answer
That sounds too simplistic, even a bit anthropocentric. As primates we are more visual than other mammals and species of other classes. (Eagles, of course, are a notable exception to this.) Other sorts of potentially relevant stimuli for predators that come to mind are movement (in frog vision) and smell (in many species). I think you would have to consider the perceptual and cognitive systems of each species of predator to really get a handle on this, as these systems are inextricably bound to corporal morphology.
There are studies that indicate that (visually perceived) shape is one factor (among others) in predator recognition by primate prey (another kind of "flip side"). Derek Hodgson has written about the recognition of felids by hominins; check out his ResearchGate profile for publications. There is a growing body of literature on snake recognition by human and nonhuman primates. Our primate ancestors used to be a major food source for constrictors, and venomous serpents continue to cause a lot of human deaths. Some of the researchers who have looked at this are Richard Coss, Judy DeLoache, Hogshen He, Lynne Isbell, Nobuyuki Kawai, Vanessa LoBue, Arne Öhman, Michael Penkunas, and Brandon Wheeler.
  • asked a question related to Visual Processing
Question
3 answers
Hello all,
Is there any difference or similarity between the human eye and a camera in terms of noise level? Both use similar techniques, but the camera captures noise while the eye doesn't. Is there any way to create artificial noise so that only the camera captures it, but not the eye?
Relevant answer
Answer
Hi
When looking at process piping vibration, the rule of thumb is that the human eye exaggerates vibration by about a factor of 10x.
This is easy to verify: just put a marker on the vibrating object and a ruler next to the marker. I have arrived at a similar factor using other, more serious equipment such as position lasers, LVDTs, yo-yo pots, and accelerometers.
The best explanation I have been able to come up with is that the human eye tracks motion up to about 4 Hz and, in doing so, rolls in its socket. This implies that scale matters: videotape the motion and play it back on a 15-inch screen, and it no longer looks as impressive.
It will be interesting to hear what the RG community has to say on this matter.
All the best
Claes
  • asked a question related to Visual Processing
Question
3 answers
In Hubel's book Eye, Brain, and Vision and many other publications, it is confirmed that V1 layer 4 mainly contains (among others) simple cells that are sensitive to oriented edges appearing at certain locations in the visual field.
Why, then, does stimulation of this layer give rise to bright spots and not oriented edges/lines?
Relevant answer
Answer
Thank you all for your answers and I do share your point of view. I'll look into the presentations
Regarding the link for hubel's book:  http://hubel.med.harvard.edu/
I also think that the stimulation's lack of focality may deliver contradictory information to the brain, hindering perception. For example, you may be simultaneously activating a combination of neurons that the brain does not expect to be activated together.
It needs deeper thought; I'll try again, taking your answers into account.
  • asked a question related to Visual Processing
Question
8 answers
I am a psychotherapist looking to learn more about the visual field and the brain as I am a trainer and clinician in Brainspotting Therapy. We use a single fixed gaze point resonant with traumatic distress and get amazing positive results with clients. Looking to find ways to learn more about the brain and vision to design a study for my PhD.
  • asked a question related to Visual Processing
Question
3 answers
When we use a video of a flashing light in the head-turn preference procedure (HTPP) during familiarization and test, infants fail to discriminate our target words. However, when we use a more engaging visual stimulus, a video of a spinning colorful pinwheel, they successfully discriminate the target words. It seems that using a more engaging stimulus reinforces infants' discrimination at test. Why does it not detract from processing of the sounds?
Relevant answer
Answer
I suspect it improves their interest in the task, which increases their attention to the stimuli, rather than reinforcing their discrimination per se. The timing of the visual features could be where it would detract, but otherwise increasing attention will probably just be a good thing.
Hope this helps!
  • asked a question related to Visual Processing
Question
7 answers
The problem of final integration has to do with how the brain binds visual information. What kinds of problems does the problem of final visual integration present in our understanding of how the brain works?
Relevant answer
Answer
Visual impressions are converted into, and stored as, tuples of symbols. Imagine the view of a cluttered desktop: from a glance you can only keep a working-memory-ful of items. I'd like to mention Kim's Game.
Visual working memory is restricted in capacity to the Miller constant. On the other hand, we don't have one working memory for auditory items and one for visual items: we only have a single working memory for items of any modality, in a neutral currency, the symbols.
Packets stored in long-term memory can only be recalled in a reliable way if each packet fits entirely into working memory. Bigger data structures can be made by linking pairs of tuples that contain the same symbol, like the equi-join in relational databases.
Regards,
Joachim
  • asked a question related to Visual Processing
Question
35 answers
Reading the introduction to:
Incidental memory for parts of scenes from eye movements
Jenn H Olejarczyk · Steven G Luke · John M Henderson ·
I stumbled, as I always do, at a very standard phrasing, which referred to the eyes 'taking in' 'visual information'. At some visceral level, I simply cannot accept this formulation. Do the eyes 'take in'? What is 'visual information'?
To be clear : my question is about axiomatic assumptions and paradigms that define the way we think.  For the last sixty years or so, the cognitivist- computationalist paradigm has been the dominant explanation of human cognition - at least in the academy. This paradigm, as we know, is based on analogies to reasoning machines. And while there is abounding evidence that the brain is not a computer and scant evidence that it is, we still use electro-industrial metaphors of input and output, and of thinking as internal reasoning on  mental representations.
I am not persuaded by this. I feel that enactivist and Gibsonian approaches get closer to a fair description of what is really going on. These descriptions are almost incomprehensible to cognitivists, as the fundamental ideas in these paradigms are incommensurable.
Could it be that we find the brain mysterious in part because we apply inappropriate structuring metaphors which confound our inquiry? 
Relevant answer
Answer
Simon, you're absolutely right to be critical and to question existing research paradigms. The problem with any new paradigm is that, at first, it may give a sense of deeper insight, but later, starts to wear off until it indeed may impede further progress. Yet, in the absence of a better paradigm, one often keeps on working with it. This is troublesome especially if its paradigmatic ideas are mistaken for the “truth” (whatever that may be). By the way, as I argued in my CogProc paper, I do not think that the cognitivist/computationalist paradigm has worn off, but I do think that it needs to connect to complementary concepts and ideas from, e.g., connectionism, dynamic systems theory, and neuroscience.
You're also absolutely right that the terminology in any paradigm carries a lot of questionable ontological baggage. This is nagging, but I think also inevitable in our continuing quest for appropriate analogies, metaphors, and models by which we can only approximate the "real thing" (whatever that may be). In this sense, I am open to what is called a metaphysical (or ontological) reading of pluralism (which assumes that a "grand unifying theory" is possible), but for the moment, I adopt an explanatory (or epistemological) reading of pluralism – which, more pragmatically and in the spirit of David Marr, focuses on differences and parallels between existing explanations at different levels of description to see if and how they might be combined. My hope is that, eventually, this will lead to new and better thinking structures.
– Peter
  • asked a question related to Visual Processing
Question
9 answers
If it exists, what would be a good behavioral parameter, even an indirect one, for measuring this?
Relevant answer
Answer
Hi Analisa,
yes, as Vittorio Porciatti wrote, there is a reliable way to do that. It's the Westheimer paradigm (no "r" in the middle of Westheimer), and the field size measured by it is called the "perceptive field size". Oehler (1985) even used it with monkeys, and the seminal paper is by Lothar Spillmann. There is a chapter on it in my review of peripheral vision:
(or go to my website, www.hans.strasburger.de)
  • asked a question related to Visual Processing
Question
2 answers
I'm interested in starting a conversation between vision science and the science of thought. The occipital cortex is extremely complex, and I understand the boundaries between different levels of analysis, but I believe they should still communicate. So my question refers to models of predictive coding in early stages of visual processing and their relation to later stages, such as belief in successful outcomes. My confusion arises from contrasting different literatures on the influence of expectancy biases.
Relevant answer
Answer
 I would start with Libet, Benjamin; Gleason, Curtis A.; Wright, Elwood W.; Pearl, Dennis K. (1983). "Time of Conscious Intention to Act in Relation to Onset of Cerebral Activity (Readiness-Potential) - The Unconscious Initiation of a Freely Voluntary Act". Brain 106: 623–642. doi:10.1093/brain/106.3.623. PMID 6640273.
The paper does not answer your question concerning color directly, but I would look him up on Google for other references as well. If you find no Libet experiments using color, then you will have to do it yourself; in fact, designing the right experiment should be straightforward. Let me know; I may have some suggestions. In particular, you might explore the effects of categorical versus complementary color stimuli.
  • asked a question related to Visual Processing
Question
6 answers
I'm interested in designing color-choice experiments that incorporate different target and background colors. Thanks!
Relevant answer
Answer
One way to approach your question may be to look at opponent ganglion cell responses (with their species-specific photoreceptor inputs). For an overview of color vision in animals I would refer you to Dr. Gerald (Jerry) Jacobs's book Comparative Color Vision (1981). He has had a long career studying color vision (and pigments) in a variety of species.
Here are just a few papers:
Neitz, M., Neitz, J. and G. H. Jacobs (1991) Spectral tuning of pigments underlying red-green color vision. Science, 252, 971-974.
Jacobs, G. H. (1996) Primate photopigments and primate color vision. Proceedings of the National Academy of Science USA, 93, 577-581.
Jacobs, G. H. and J. Nathans (2009) The evolution of primate color vision. Scientific American, 300 ( #4), 56-63.
Jacobs, G. H. (2009) Evolution of colour vision in mammals. Philosophical Transactions of the Royal Society B, 364, 2957-2967.
Another organism modeled well with complementary colors is the honeybee:
Backhaus, W. and R. Menzel (1987) Color distance derived from a receptor model of color vision in the honeybee. Biological Cybernetics, 55, 321-331.
  • asked a question related to Visual Processing
Question
5 answers
Can anybody point me to some well-known and well-acknowledged models of human image recognition? Something that answers the question: how are images recognized by humans?
Neuroscience is not my field, but I'd like to get something similar to Interactive Activation Model (which applies to words).
EDIT: I'll add some details. I'm also interested in how people look at images to extract relevant content. If I show someone a picture, will they look at all elements in the picture in the same way? Will they skip certain parts or focus on others? And is it the same whether I show them a picture of a landscape or a picture of a fish sticks package?
Relevant answer
Answer
Hi Stefano,
for a survey of theories on image or, in a narrower sense, picture perception, you might see Hecht, Schwartz et al. (2003), Looking into Pictures: An Interdisciplinary Approach to Pictorial Space. Also interesting is Cutting & Massironi (1998), Pictures and their special status in perceptual and cognitive inquiry, in J. Hochberg (Ed.), Perception and Cognition at Century's End, New York: Academic Press, pp. 137-168. In this field, as in many others, an inferentialist-constructivist approach can be distinguished from a direct or "directed" perception approach. Another distinction, cutting across this one, is between those who take images to be something similar to marks and signs and those who do not. For the first case, you can have a look at Ittelson (1996), Visual Perception of Markings, Psychonomic Bulletin; for the second, there is the experimental and theoretical tradition of phenomenology in the Berlin-Graz and Italian School sense, i.e. Massironi himself, Bozzi, Vicario, and many others. I myself have tried to derive a theoretical model of which cognitive capacities could support picture perception and how that could help in assessing the neurobiological models. I understand that many more issues could be at stake depending on the particular topic you are addressing. Hope it helps.
  • asked a question related to Visual Processing
Question
13 answers
I believe some metrics related to the eye, such as pupil dilation, may give an indication of the extent to which something being looked at is being actively processed. However, I am interested in ways to determine whether someone is paying attention to (cognitively processing) what they are looking at in natural, real-world conditions, where changing light levels may make it difficult to use pupil dilation as a measure. I am therefore wondering if there are any tell-tale signs from eye movements that can reveal whether something is being actively processed and has some cognitive importance to the observer.
For example, research on inattentional blindness shows that just because something in our environment is fixated does not mean it is perceived or processed. Also, research on mind-wandering during reading suggests that eye movements may be qualitatively different during periods of mind-wandering than when what is being read is actually processed. Are there any similar findings for natural situations, such as simply walking through an environment?
Relevant answer
Answer
Bear in mind that covert shifts of attention can happen in the absence of eye movements (which is why they're called covert).  See for example the introduction of the article linked below.
  • asked a question related to Visual Processing
Question
3 answers
We know, for example, that some action games and mathematics can help develop these skills. What other activities can?
Relevant answer
Answer
I would say navigation, in urban or non-urban environments, adding some changes to the daily routes from home to work/school to shopping: walking, cycling, driving, or sailing outside of, or expanding, the routine surrounding environment. Perhaps also using external imagery such as paper maps or smart devices with LBS (Location Based Services) and possibly AR (Augmented Reality), which can reference the place and inform the corresponding bodily experience, spatial cognition, and mental imagery of the surrounding environment. Whether all of this can be reproduced in a full VR (Virtual Reality) environment is an issue being studied in neuroscience and design, and there may not yet be a definite answer...
  • asked a question related to Visual Processing
Question
17 answers
What is the relationship between visual perception and stress or anxiety? By what mechanism does visual perception cause these effects?
Relevant answer
Answer
An interesting, free-ranging conversation. As a meditation teacher, I'm exclusively interested in dissolving and eliminating all stress, anxiety, and other negative effects that limit a person's functioning in life, that limit or prevent their natural experience of peace, love, and happiness. Nevertheless, I see the value of academic reasoning and research into all the topics we are raising here. While I recognize the value of research, I remain focused on directly helping people live better by eliminating their stored stresses so the nervous system can function fully and properly.
  • asked a question related to Visual Processing
Question
3 answers
How do we perceive certain instructions and perform them? What series of actions runs through our eyes, brain, and other body parts? How do things get decoded once seen on a piece of paper?
Relevant answer
Answer
Thank you Markus Huff and B.L. William Wong for the replies. They are really helpful; I am happy they are helping me see things from a different perspective. Thanks a lot.
  • asked a question related to Visual Processing
Question
19 answers
Specifically, if we use an eye-tracking device to record the spontaneous viewing of an observer, what kind of evidence might indicate successful perceptual grouping (i.e., by Gestalt laws of similarity, proximity, continuation etc...) versus a no grouping? I will appreciate if anyone can point out a few good reference papers.
Relevant answer
Answer
Hello Sarina,
You have been well served with theory, so I thought I would take a more pragmatic, nuts-and-bolts approach to what I think is your problem. Am I right in thinking that you want to present visual stimuli to observers, record their spontaneous eye movements and fixations on these stimuli, and then, by analysing the distribution of eye fixations, fixation times, and perhaps scan paths, find evidence for or against what you call "grouping"? Let's leave what we mean by "grouping" for the moment. Once you have recorded the x-y coordinates of your observers' fixations on your stimuli and their fixation durations, it is possible to view the distribution of fixations and durations as a three-dimensional map for just one observer or cumulatively across all your observers. You can do this easily using a three-dimensional chart in Microsoft Excel, and see in detail where any observer has "grouped" his or her fixations and fixation durations on any one visual stimulus. You can also look at the accumulation of fixations and fixation times across one or many observers and/or across one or many stimuli.
Some may argue that this method of judging where “grouping” of fixations and duration on the stimuli has occurred is just a bit too subjective. Some people may see groupings in the distributions that others do not. One step towards a more objective analysis of the distributions is to use cluster analysis. There are at least a couple of cluster analyses that can be used with eye-movement data - k-means and the Wallace-Boulton Information measure. I have attached a paper explaining how to carry out the analyses described above. There are other papers on the use of cluster analysis on eye-movement data, but until we are sure that this approach is going to be useful to you, we can leave these and also treatment of scan paths aside.
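The k-means step described above can be sketched in a few lines. This is a minimal illustration only, with made-up fixation coordinates and a fixed pair of starting centroids (the real analysis would use recorded eye-tracker data and would need to choose the number of clusters):

```python
# Hypothetical sketch: clustering eye-fixation (x, y) coordinates with
# plain k-means, as one less subjective way of judging "grouping".
import math

def kmeans(points, centroids, iterations=20):
    """Basic k-means on 2-D fixation coordinates."""
    for _ in range(iterations):
        # Assign each fixation to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            distances = [math.dist(p, c) for c in centroids]
            clusters[distances.index(min(distances))].append(p)
        # Recompute each centroid as the mean of its cluster
        # (keep the old centroid if a cluster ends up empty).
        centroids = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids, clusters

# Two obvious fixation "groups" on a stimulus (invented screen coordinates).
fixations = [(10, 12), (11, 10), (9, 11), (80, 82), (82, 80), (81, 83)]
centroids, clusters = kmeans(fixations, centroids=[(0, 0), (100, 100)])
print(centroids)  # one centre per fixation group
```

Fixation durations could be brought in by weighting each point, and the same clusters can then be inspected against whatever grouping your theory predicts.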
There are also two ways to approach your problem. You could begin with a theory that predicts where “grouping” will occur. For example, if your stimuli were alphabetic characters, you may predict that attention would be concentrated (grouped) around the most discriminating features of the characters. You may have reason to believe that some parts of your visual stimuli are more “salient” than other parts, and will attract more attention. The described data-analysis methods can help you test these theories. A second approach is simply to have no theory and look at where attention is grouped on your stimuli, and on the basis of your results, work towards some predictive theory that could be tested on new stimuli. Note too that the distribution of attention is going to be strongly influenced by the task demands, the nature of the stimuli and instructions to the observers.
It will not have escaped your notice that I have moved from talking about eye movements and fixations to “attention” - a controversial move that could be discussed. I have also introduced the term “salience” - a real can of worms that one! And, while I understand the Gestaltist notion of grouping, I am a little unsure of what you and the other commentators mean by the same term.
  • asked a question related to Visual Processing
Question
8 answers
The child's explanation for its (very clever!!) question: wouldn't it be faster if visual stimuli were processed in the frontal lobe (shorter path)?
I got this question during a talk about the brain at a Junior Science Café meeting. Does anyone know a good, child friendly answer?
Relevant answer
Answer
One simple answer would be that our brains have evolved from back to front, so the back of the brain where we process visual information is very old (evolutionarily speaking) while front (where we think) is much newer. Thus animals were able to see long before they evolved to pose questions about brain organization.
  • asked a question related to Visual Processing
Question
14 answers
Based on the "Object-Spatial-Verbal Cognitive Style Model" from Kozhevnikov, Kosslyn and Shephard (2005), are there any available instruments to measure the visuospatial dimensions separately (Object Imagery, the ability to mentally represent object details such as form and color; and Spatial Imagery, the ability to mentally represent and manipulate spatial relations between objects and their parts)?
Any tips will be welcome, thank you!
Relevant answer
Answer
Tiago, the MARMI and the MASMI are on ResearchGate. I am sending you the MASMI and an article. If you need more information, please tell me.
Bye