Cite this article: White TE, Rojas B, Mappes J, Rautiala P, Kemp DJ. 2017 Colour and luminance contrasts predict the human detection of natural stimuli in complex visual environments. Biol. Lett. 13: 20170375. http://dx.doi.org/10.1098/rsbl.2017.0375
Received: 14 June 2017
Accepted: 26 August 2017
Subject Areas: behaviour
Keywords: human, perception, psychophysics, sensory ecology, vision
Author for correspondence: Thomas E. White (e-mail: thomas.white@mq.edu.au)
Electronic supplementary material is available online at https://doi.org/10.6084/m9.figshare.c.3870385.
Evolutionary biology
Colour and luminance contrasts predict the human detection of natural stimuli in complex visual environments
Thomas E. White1, Bibiana Rojas2, Johanna Mappes2, Petri Rautiala3
and Darrell J. Kemp1
1Department of Biological Science, Macquarie University, North Ryde 2109, Australia
2Centre of Excellence in Biological Interactions, University of Jyväskylä, Jyväskylä, Finland
3School of Biology, University of St Andrews, St Andrews KY16 9TH, UK
TEW, 0000-0002-3976-1734; BR, 0000-0002-6715-7294
Much of what we know about human colour perception has come from psychophysical studies conducted in tightly-controlled laboratory settings. An enduring challenge, however, lies in extrapolating this knowledge to the noisy conditions that characterize our actual visual experience. Here we combine statistical models of visual perception with empirical data to explore how chromatic (hue/saturation) and achromatic (luminance) information underpins the detection and classification of stimuli in a complex forest environment. The data best support a simple linear model of stimulus detection as an additive function of both luminance and saturation contrast. The strength of each predictor is modest yet consistent across gross variation in viewing conditions, which accords with expectation based upon general primate psychophysics. Our findings implicate simple visual cues in the guidance of perception amidst natural noise, and highlight the potential for informing human vision via a fusion between psychophysical modelling and real-world behaviour.
1. Introduction
The interactions of light and matter offer a rich source of information about the
world, and vision often dominates the sensory ecology of animals. Regardless
of ocular structure, visual processing begins with the absorption of photons
by one or more receptors sensitive to a limited range of wavelengths [1,2]. In
humans, the perception of luminance is mediated by the pooled stimulation
of mid- and long-wavelength cones, which is broadly used to judge form,
motion and texture [3]. This enables rapid characterization of entire panoramas
because the greatest spectral power—hence ‘information’—generally exists in
the achromatic channel [2,4]. Unlike achromatic cues, however, the chromatic
features of stimuli (i.e. hue and saturation) are relatively invariant, and so
tend to be used for higher tasks such as object recognition, categorization
and memory [3,5]. Humans, as old world primates, possess a trichromatic
visual system that enables colour perception via two independent ‘opponency’
channels [6]. One channel arises via comparison of relative stimulation among
mid-versus-long-wave sensitive cones, and the second arises through comparing the stimulation of mid- and long-wave receptors with that of short-wave
receptors [7].
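As textbook shorthand (a schematic summary, not a formulation taken from this paper), with L, M and S denoting the long-, mid- and short-wavelength cone signals, these two channels are often written as RG ∝ L − M and BY ∝ S − (L + M), with the achromatic (luminance) signal approximated by the pooled response L + M.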
This initial extraction of colour and luminance information is critical for
higher-level cognitive functions and ultimately defines our ability to judge
spatial perspective, detect movement, classify scenes, and locate objects
within them. Present knowledge of how such information is weighted among
these tasks stems from exacting laboratory-based psychophysical study
[3,6,7], which has generated precise models of colour perception [8]. The world at large, however, is visually dynamic. Information must be continuously integrated, and the most salient cues may shift with the broader viewing context [9,10]. An outstanding challenge therefore lies in extrapolating laboratory-gained knowledge to colour sensation under the noisy environments that characterize our historical (evolutionary) and contemporary visual experience. Integrative, 'top-down' approaches that combine physiological knowledge with natural-behavioural data hold particular promise [11], though they remain largely untested in the context of human visual ecology.
In this study we used empirical data of human performance in an object detection task to explore which spectral
cues best guide detection and classification amidst gross
visual noise. We used an information-theoretic approach to
define the linear combination of parameters that (additively
and/or interactively) best explained subject performance
in a forest environment under varied visual conditions.
In doing so, we explicitly tested the prediction derived from
primate psychophysics that detection should rely upon both
luminance and chromatic contrasts [3,6,7].
2. Methods
(a) Data provenance
We used data from an experiment in which human viewers were
tasked with finding objects under two forest-light environments
[12]. The focal stimuli consisted of paraffin wax models of four
different morphs of the dyeing poison frog Dendrobates tinctorius
(fig. 1e–h in [12]), whose patterns differed in the arrangement
and constitution of ‘yellow’, ‘blue’ and ‘black’ patches (see [12]
for model-construction details). Reflectance spectra (figure 1a)
were captured from representative patches using an OceanOptics
USB4000-FL spectrometer and a PX-2 pulsed xenon light source,
calibrated against a Spectralon (Labsphere, Congleton, UK) white
standard. Measurements of a haphazard sample of leaf-litter
background material—upon which models were presented—
were collected at the same locality.
During the human-detection assay, 20 model stimuli (five of
each morph) were placed randomly in two 6 × 6 m quadrats that differed in their light environment; one was located under a large canopy gap, and the other under a wholly closed canopy. All trials were conducted on a single overcast day between 07.00 and 11.00, in an effort to minimize within-treatment environmental variation between trials. Twenty-five volunteers (12
women, 13 men) were asked to find as many models as possible
in each quadrat within 30 s. Twelve of the participants started in
the canopy gap environment, while the other 13 started in the
closed forest environment. Upon completing their search in
the first environment, each participant repeated the task in the
second environment. Participants had no experience with the
focal stimuli prior to their first trial, and individual trials were
independent from one another.
(b) Visual modelling
Figure 1. The spectral reflectance (n = 5 samples) of (a) model stimuli components and (b) background material (reflectance, %, against wavelength, nm), along with their position in the (c) luminance (L) and (d) colour-opponent (a, green to red; b, blue to yellow) dimensions of the CIELAB model of human colour sensation. The colours of each point/line (yellow, blue, black and brown) approximate the colour of the elements comprising each model 'morph' as seen by a human observer. (Online version in colour.)

We used the CIELAB model of human perception to estimate the subjective chromatic and achromatic visual information presented by each stimulus. We used the 10-degree standard observer colour-matching functions, and modelled all stimuli under two illumination conditions ('forest shade' for full canopy cover, and 'blue sky' to simulate large canopy gaps; [13]) to capture the environmental treatment effects from the original study. We otherwise followed standard calculations for the CIELAB model and its CIELCh cylindrical transformation [14]. All visual modelling was run using 'pavo' for R [15].
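As a rough, minimal sketch of how such a workflow can be run in pavo (not the authors' script; the file path, illuminant choices and argument settings are assumptions that may need adjusting between pavo versions):

library(pavo)

# Import and clean reflectance spectra (hypothetical folder of .txt spectra, 300-700 nm)
specs <- getspec("spectra/", ext = "txt", lim = c(300, 700))
specs <- procspec(specs, opt = "smooth", fixneg = "zero")

# CIE 10-degree standard observer under a built-in 'forestshade' (or 'bluesky') illuminant
vm <- vismodel(specs, visual = "cie10", illum = "forestshade")

lab <- colspace(vm, space = "cielab")   # L, a, b coordinates (cf. figure 1c,d)
lch <- colspace(vm, space = "cielch")   # cylindrical form: lightness, chroma (saturation), hue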
As noted above, each stimulus comprised four distinct
coloured elements that varied in their relative proportions;
the yellow, blue, and black paints of the models themselves,
and the brown leaf-litter of the presentation background
(figure 1a,b). We therefore estimated between-element hue, saturation, and luminance contrasts as the mean of the pairwise differences in each. We also estimated the pairwise 'colour difference' between each patch—which broadly captures the combined
contributions of hue, saturation and luminance contrasts—as the
distance between the centroids of each group in CIELAB/LCh
space calculated using the CIEDE2000 colour-difference formula
(a Euclidean distance adjusted for perceptual non-uniformity).
We then estimated the information offered by each of the four
focal stimuli by combining these values in a way that accounted
for the difference in their relative contribution to the overall
pattern. To estimate the hue, saturation and luminance contrast
generated by stimuli, we simply took the maximum of any
between-patch comparison in each variable, weighted by their
combined relative area.
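For orientation, and as a standard formulation rather than one reproduced from the paper, the unadjusted colour difference between two patches with CIELAB coordinates (L1*, a1*, b1*) and (L2*, a2*, b2*) is the Euclidean distance ΔE*ab = √[(L2* − L1*)² + (a2* − a1*)² + (b2* − b1*)²]; CIEDE2000 then rescales the lightness, chroma and hue components with weighting functions so that equal numerical distances correspond more closely to equal perceived differences.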
We estimated the overall colour contrast of stimuli in two
ways, representing subtly different mechanisms by which this
information may be perceived by a human viewer [3]. We esti-
mated the maximum colour difference as above, by taking the
maximum colour difference of any pairwise comparison
weighted by the combined relative area of both elements. We
also estimated the integrated, or average, colour difference by
combining all pairwise colour-difference estimates, and weighting
each by its relative area.
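As an illustrative sketch of these two pattern-level summaries (generic R code with made-up values; the authors describe the weighting only verbally, so the area weighting shown here is one plausible reading):

# Hypothetical pairwise comparisons between pattern elements: a difference measure for
# each pair (e.g. a CIEDE2000 colour difference) and the relative area of each element
pairs <- data.frame(
  patch1 = c("yellow", "yellow", "blue"),
  patch2 = c("blue",   "black",  "black"),
  diff   = c(12.4, 30.1, 25.7),     # illustrative pairwise differences
  area1  = c(0.35, 0.35, 0.25),     # relative areas of the two elements in the pattern
  area2  = c(0.25, 0.40, 0.40)
)

w <- pairs$area1 + pairs$area2                 # combined relative area of each pair

max_contrast <- max(pairs$diff * w)            # 'maximum' contrast, area-weighted
int_contrast <- weighted.mean(pairs$diff, w)   # 'integrated' (average) contrast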
(c) Statistical modelling
We used a restricted maximum likelihood (REML) based information-theoretic approach ([16]; and electronic supplementary material, methods) to rank a set of generalized linear mixed-effect models (table 1) that represent alternate hypotheses for the way in which subjective visual cues guide object detection in noisy environments, as informed by knowledge of primate psychophysics [3,6,7]. In all cases we modelled the number of stimuli detected as a Gaussian response (as supported by normally distributed data and residuals), and included participant ID as a random covariate. Six models were constructed from all individual and two-way linear combinations of luminance, hue, and saturation contrasts. A further six models were built using the same combinations, with the addition of the between-environment deviation for each factor: that is, the absolute difference in the mean value of each factor (hue, saturation and/or luminance contrast) between participants' starting and finishing environments, thereby estimating any effects of the order in which environmental treatments were completed. We included two models comprised of the main effects and two-way interaction of luminance and either hue or saturation, which represents a differential, shifting reliance on chromatic and achromatic cues across the two viewing environments. We built a further four models of overall 'colour difference' using estimates of maximum and integrated colour difference individually, as well as each with their associated between-environment deviation. Finally, we included an intercept-only model as our null, which represented stimulus detections as a random process. We used the R package nlme to build GLMEs, and MuMIn for information-theoretic model selection [17].

Table 1. Full model-selection table, detailing the relative strength of candidate models for the relationship between target detection and one or more linear combinations of: luminance contrast (ΔL), saturation contrast (ΔS), hue contrast (Δh), maximum colour difference (ΔCmax.), integrated colour difference (ΔCint.), and the gross between-environment deviation for each (dev.). Estimates of the log-likelihood (LL), adjusted Akaike's information criterion (AICc), change in AICc relative to the leading model (ΔAICc), and relative weights (w) are provided for each model. Bolded estimates denote the most informative models, as broadly indicated by a relative increase in AICc of less than 2.

model                          d.f.   LL        AICc     ΔAICc    w
ΔL + ΔS                        5      -264.18   538.70   0.00     0.505
ΔL                             4      -265.91   540.10   1.35     0.258
ΔL × ΔS                        6      -264.32   541.10   2.43     0.150
ΔS                             4      -267.95   544.10   5.43     0.033
ΔS + S dev.                    5      -267.40   545.10   6.44     0.020
ΔL + ΔS + L dev. + S dev.      7      -265.46   545.60   6.87     0.016
ΔL + L dev.                    5      -268.46   547.30   8.57     0.007
ΔCint.                         4      -270.23   548.70   9.99     0.003
ΔL + Δh                        5      -269.26   548.90   10.17    0.003
null (intercept only)          3      -271.62   549.40   10.67    0.002
Δh                             4      -271.88   552.00   13.29    0.001
ΔS + Δh                        5      -271.02   552.40   13.69    0.001
ΔCmax.                         4      -273.74   555.70   17.01    0.000
ΔCint. + C dev.                5      -273.43   557.20   18.51    0.000
ΔL × Δh                        6      -272.54   557.60   18.88    0.000
Δh + h dev.                    5      -275.82   562.00   23.28    0.000
ΔS + Δh + S dev. + h dev.      7      -273.74   562.10   23.44    0.000
ΔCmax. + C dev.                5      -276.94   564.20   25.54    0.000
ΔL + Δh + L dev. + h dev.      7      -275.29   565.20   26.54    0.000
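A minimal sketch of how this fitting-and-ranking workflow might be run with nlme and MuMIn, using the candidate set in table 1 as a guide, is given below (illustrative only: the data frame and variable names are hypothetical, not the authors' code):

library(nlme)
library(MuMIn)

# dat: one row per participant x environment, with the number of detections and the
# luminance (dL) and saturation (dS) contrasts of the focal stimuli (hypothetical names)
m_LS   <- lme(detections ~ dL + dS, random = ~ 1 | participant, data = dat, method = "REML")
m_L    <- lme(detections ~ dL,      random = ~ 1 | participant, data = dat, method = "REML")
m_null <- lme(detections ~ 1,       random = ~ 1 | participant, data = dat, method = "REML")

# Rank candidates by AICc and obtain Akaike weights
model.sel(m_LS, m_L, m_null)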
3. Results
The most parsimonious model of stimulus detection indicated a positive contribution of both luminance and saturation contrast (tables 1 and 2). Of all models tested, it was approximately twice as informative as the second-best model (in terms of minimizing the estimated relative Kullback–Leibler distance; [16]), which included a positive contribution of luminance contrast alone (ΔAICc = 1.35, w1/w2 = 1.96). Both models clearly outperformed the null ((w1 + w2)/wnull = 299). The strength of the individual effects of luminance and saturation contrast was modest (figure 2).
However, the presence of luminance contrast in both leading
models, and the minimal change in log-likelihood between
them despite the extra parameter (table 1, bold), imply a
more fundamental role in stimulus detection. Hue contrast
was uninformative, with all models containing it performing
no better than the null (table 1).
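For reference, the evidence ratios quoted above follow the standard Akaike-weight formulation of [16]: w_i = exp(−Δ_i/2) / Σ_j exp(−Δ_j/2), where Δ_i is the AICc difference between model i and the best of the candidate models. The ratio of the two leading models is therefore w1/w2 = exp(1.35/2) ≈ 1.96, matching the value reported above.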
4. Discussion
Extensive laboratory-based work continues to develop our
understanding of the physiology and psychology of human
colour sensation [3,7]. Here we built upon recent empirical
data [12], in an effort to identify the basis of stimulus
detection/classification across complex, natural visual
environments. The most parsimonious models indicated a
simple additive contribution of both luminance and saturation
contrast; brighter and more ‘chromatic’ stimuli were more
likely to be found by human viewers across environments
(table 1; figure 2).
The primacy of luminance contrast as a predictor of detections is consistent with our knowledge of primate visual
ecology, and likely represents a number of concurrent
processes. For example, reflexive attentional shifts triggered
by the appearance of objects in the visual periphery are
mediated by achromatic, rather than chromatic, cues [18].
Luminance contrast also guides the rapid characterization of
panoramic scenes, and affords the location and fixation of
target objects [4,7]. This is exemplified by recent work on
new world monkeys, in which achromatic contrast alone predicted individual success in short-range fruit foraging [19].
Finally, this channel mediates the perception of edges and
shapes that underlie finer-scale object recognition; a specialization echoed in the distribution of receptors across the
human retina [7].
Under variable illumination, chromatic cues provide the
most reliable information about the material properties of
objects (a truth partly credited for the evolution of colour
vision itself; [6,20]). Given that our experimental data were
drawn from a task that demanded both the detection and
categorization of objects amidst noise [12], we would expect
a role for chromatic contrast in leading models (table 1). As
with luminance, the predictive strength of this parameter
(figure 2b) is likely to reflect several visual processes. These
include object detection, segregation and discrimination
under trying conditions (as noted above), along with
higher-level processes involving memory and spatial recall
[3]. However, given that hue is typically a more reliable cue
than saturation—which is also susceptible to shifts in illumination—its lack of influence here is of interest. In the current context, this is likely a consequence of humans' ability to alternate the use of chromatic cues depending on whether they are diagnostic features of the target [20]. This is further supported by extensive psychological work demonstrating that viewers' selective attention may be captured by locally salient features of stimuli, such as discrepancies in hue, saturation and/or motion [18,21,22]. The inclusion of saturation over hue contrast in our most parsimonious models, then, may simply be a function of the greater range of between-stimulus variation in that feature (i.e. its particular salience as a visual cue, or 'singleton' [22]). This is of course a general limitation of our experimental data in that the focal targets imitate a limited range of natural pattern variation, rather than the spread of colour and luminance contrasts required for more general inference.

Figure 2. Conditional plots of predictors from the most parsimonious model of stimulus detection (table 1), comprised of (a) luminance contrast and (b) saturation contrast (in both panels, the y-axis is the number of detections). Points denote partial residuals, black lines are the restricted maximum-likelihood fits of a given predictor with the other held at its median value, and shaded areas demarcate 95% confidence bands.

Table 2. Parameter estimates and standard errors from the most parsimonious GLME models of stimulus detections (table 1, bold), along with their overall fit.

model      parameter    est.    s.e.    cond. R²
ΔL + ΔS    intercept    1.27    0.30    0.20
           ΔL           0.15    0.04
           ΔS           0.69    0.36
ΔL         intercept    1.52    0.28    0.19
           ΔL           0.16    0.04
Accessing the perceptual world of animals remains a fundamental challenge, and progress will stem from a diversity
of approaches. Given the ultimate importance of behaviour
in questions of sensory ecology and evolution, underexplored
potential lies in drawing on traditional psychophysical
knowledge to inform manipulative, natural-behavioural
experiments. Our results support the promise of this
approach, and implicate relatively simple cues in guiding
human visual behaviour under naturally dynamic conditions.
Ethics. All experimental data were drawn from a previously published
study [12] for which research permits were obtained from CNRS-
Guyane, and all participants gave written consent for the use of
anonymised data in a scientific publication.
Data accessibility. All raw data are available via Figshare [23].
Authors’ contributions. B.R., J.M., D.J.K. and T.E.W. conceived the present
study, B.R. and P.R. collected the data, T.E.W. analysed the data,
T.E.W. and B.R. wrote the manuscript, and all authors critically
revised the manuscript. All authors approved the final version, and
agree to be held accountable for all aspects of the work.
Competing interests. The authors have no competing interests to declare.
Funding. B.R. was funded by an ASAB Research Grant, and a mobility
grant from the Research Council of the University of Jyväskylä. Both B.R. and J.M. are funded by the Finnish Centre of Excellence in Biological Interactions. D.J.K. was supported by the Australian Research
Council through grants DP140104107 and DP160103668.
Acknowledgements. We thank two anonymous reviewers for their
thoughtful suggestions. T.E.W. thanks Elizabeth Mulvenna and
Cormac White for their support.
References
1. Kelber A, Vorobyev M, Osorio D. 2003 Animal colour vision—behavioural tests and physiological concepts. Biol. Rev. Camb. Philos. Soc. 78, 81–118. (doi:10.1017/S1464793102005985)
2. Osorio D, Vorobyev M. 2005 Photoreceptor spectral sensitivities in terrestrial animals: adaptations for luminance and colour vision. Proc. R. Soc. B 272, 1745–1752. (doi:10.1098/rspb.2005.3156)
3. Gegenfurtner KR, Kiper DC. 2003 Color vision. Annu. Rev. Neurosci. 26, 181–206. (doi:10.1146/annurev.neuro.26.041002.131116)
4. Delorme A, Richard G, Fabre-Thorpe M. 2000 Ultra-rapid categorisation of natural scenes does not rely on colour cues: a study in monkeys and humans. Vision Res. 40, 2187–2200. (doi:10.1016/S0042-6989(00)00083-3)
5. Wichmann FA, Sharpe LT, Gegenfurtner KR. 2002 The contributions of color to recognition memory for natural scenes. J. Exp. Psychol. Learn. Mem. Cogn. 28, 509–520. (doi:10.1037/0278-7393.28.3.509)
6. Vorobyev M. 2004 Ecology and evolution of primate colour vision. Clin. Exp. Optom. 87, 230–238. (doi:10.1111/j.1444-0938.2004.tb05053.x)
7. Nathans J. 1999 The evolution and physiology of human color vision: insights from molecular genetic studies of visual pigments. Neuron 24, 299–312. (doi:10.1016/S0896-6273(00)80845-4)
8. Renoult JP, Kelber A, Schaefer HM. 2015 Colour spaces in ecology and evolutionary biology. Biol. Rev. 92, 292–315. (doi:10.1111/brv.12230)
9. Blake R, Lee SH. 2005 The role of temporal structure in human vision. Behav. Cogn. Neurosci. Rev. 4, 21–42. (doi:10.1177/1534582305276839)
10. Schaefer HM, Levey DJ, Schaefer V, Avery ML. 2006 The role of chromatic and achromatic signals for fruit detection by birds. Behav. Ecol. 17, 784–789. (doi:10.1093/beheco/arl011)
11. Kemp DJ, Herberstein ME, Fleishman LJ, Endler JA, Bennett AT, Dyer AG, Hart NS, Marshall J, Whiting MJ. 2015 An integrative framework for the appraisal of coloration in nature. Am. Nat. 185, 705–724. (doi:10.1086/681021)
12. Rojas B, Rautiala P, Mappes J. 2014 Differential detectability of polymorphic warning signals under varying light environments. Behav. Processes 109, 164–172. (doi:10.1016/j.beproc.2014.08.014)
13. Endler JA. 1993 The color of light in forests and its implications. Ecol. Monogr. 63, 1–27. (doi:10.2307/2937121)
14. Westland S, Ripamonti C, Cheung V. 2012 Computational colour science using MATLAB. New York, NY: John Wiley & Sons.
15. Maia R, Eliason CM, Bitton PP, Doucet SM, Shawkey MD. 2013 pavo: an R package for the analysis, visualization and organization of spectral data. Methods Ecol. Evol. 4, 906–913. (doi:10.1111/2041-210X.12069)
16. Burnham KP, Anderson DR. 2002 Model selection and multimodel inference: a practical information-theoretic approach. New York, NY: Springer.
17. Bartoń K. 2013 MuMIn: multi-model inference. R package version 1.5.
18. Theeuwes J. 1995 Abrupt luminance change pops out; abrupt color change does not. Percept. Psychophys. 57, 637–644. (doi:10.3758/BF03213269)
19. Hiramatsu C, Melin AD, Aureli F, Schaffner CM, Vorobyev M, Matsumoto Y, Kawamura S. 2008 Importance of achromatic contrast in short-range fruit foraging of primates. PLoS ONE 3, e3356. (doi:10.1371/journal.pone.0003356)
20. Oliva A, Schyns PG. 2000 Diagnostic colors mediate scene recognition. Cogn. Psychol. 41, 176–210. (doi:10.1006/cogp.1999.0728)
21. Parkhurst D, Law K, Niebur E. 2002 Modeling the role of salience in the allocation of overt visual attention. Vision Res. 42, 107–123. (doi:10.1016/S0042-6989(01)00250-4)
22. Theeuwes J. 1992 Perceptual selectivity for color and form. Percept. Psychophys. 51, 599–606. (doi:10.3758/BF03211656)
23. White TE, Rojas B, Mappes J, Rautiala P, Kemp DJ. 2017 Data from: Simple visual cues predict the human detection of stimuli amidst natural noise. Figshare. (doi:10.6084/m9.figshare.5235079)
Article
* Recent technical and methodological advances have led to a dramatic increase in the use of spectrometry to quantify reflectance properties of biological materials, as well as models to determine how these colours are perceived by animals, providing important insights into ecological and evolutionary aspects of animal visual communication. * Despite this growing interest, a unified cross-platform framework for analysing and visualizing spectral data has not been available. We introduce pavo, an R package that facilitates the organization, visualization and analysis of spectral data in a cohesive framework. pavo is highly flexible, allowing users to (a) organize and manipulate data from a variety of sources, (b) visualize data using R's state-of-the-art graphics capabilities and (c) analyse data using spectral curve shape properties and visual system modelling for a broad range of taxa. * In this paper, we present a summary of the functions implemented in pavo and how they integrate in a workflow to explore and analyse spectral data. We also present an exact solution for the calculation of colour volume overlap in colourspace, thus expanding previously published methodologies. * As an example of pavo's capabilities, we compare the colour patterns of three African glossy starling species, two of which have diverged very recently. We demonstrate how both colour vision models and direct spectral measurement analysis can be used to describe colour attributes and differences between these species. Different approaches to visual models and several plotting capabilities exemplify the package's versatility and streamlined workflow. * pavo provides a cohesive environment for handling spectral data and addressing complex sensory ecology questions, while integrating with R's modular core for a broader and comprehensive analytical framework, automated management of spectral data and reproducible workflows for colour analysis.