Cartography and Geographic Information Science
ISSN: (Print) (Online) Journal homepage: https://www.tandfonline.com/loi/tcag20
Aerial perspective for shaded relief
Bernhard Jenny & Tom Patterson
To cite this article: Bernhard Jenny & Tom Patterson (2020): Aerial perspective for shaded relief,
Cartography and Geographic Information Science
To link to this article: https://doi.org/10.1080/15230406.2020.1813052
Published online: 07 Oct 2020.
Aerial perspective for shaded relief
Bernhard Jenny and Tom Patterson
Faculty of Information Technology, Monash University, Melbourne, Australia;
U.S. National Park Service (Retired), Harpers Ferry, WV, USA
Aerial perspective is an essential design principle for shaded relief: high-elevation terrain is shown with strong luminance contrast, low elevations with low contrast. Aerial perspective results in a more expressive shaded relief and helps the reader understand the structure of a landscape more easily. We introduce a simple yet effective method for adding aerial perspective to shaded relief that is easy for the mapmaker to control.
Received 26 May 2020; accepted 18 August 2020.

Keywords: Swiss style relief shading; hillshade; shaded relief; aerial perspective; proximity luminance covariance
1. Introduction

Through the use of illumination and shadowing, shaded
relief images show terrain as a continuous, three-
dimensional surface on a ﬂat map that is easy to under-
stand for most map readers. Cartographers have devel-
oped a series of design principles for creating eﬀective
shaded relief images (Imhof, 1982). These design prin-
ciples include using an illumination direction from the
upper left to avoid the illusion of terrain inversion
(Biland & Çöltekin, 2017), locally adjusting illumination
to accentuate the shape of individual landforms (Brassel,
1974a; Veronesi & Hurni, 2014, 2015), adjusting bright-
ness of major landforms (Kennelly & Stewart, 2006;
Marston & Jenny, 2015), showing ﬂat areas with
a consistent gray tone (Jenny, 2001), and using aerial
perspective to more clearly show the vertical distribu-
tion of elevation (Brassel, 1974a; Imhof, 1982; Patterson,
2001a). As observed in nature, aerial perspective, also
known as atmospheric perspective, is due to haze and
other particles in the atmosphere that scatter light. As
a result, landscape features further away in the back-
ground appear fainter than those in the foreground. The
contrast in luminance and color between foreground
and background features decreases with distance from
the viewer – a technique favored by classical painters for adding depth to landscape paintings.
Cartographers use aerial perspective in shaded relief as
a visual cue to diﬀerentiate between high mountain
summits and lowlands (Figure 1). When a map reader
observes the terrain from above, mountains appear clo-
ser to the reader so there is a stronger contrast between
dark, shaded slopes and bright, illuminated slopes. More
distant topography, such as lowlands, is shown with
reduced contrast. The result is a shaded relief image
that emphasizes the three-dimensional eﬀect and that
clearly distinguishes between high and low elevations.
Our contribution is a simple algorithm for adding
aerial perspective to grayscale shaded relief images.
When developing this method, we had the following
goals: (1) the user should be able to easily control the
application through a minimum number of parameters
that are simple to understand; (2) the algorithm should
be simple to add to existing shading algorithms; (3)
adding aerial perspective should not change the gray value of flat areas, so as to maintain a uniform base color above and below which the terrain features rise and fall; and (4) the method must be flexible in order
to accommodate various shading methods, for example,
methods that combine multiple illumination directions
(e.g., Mark, 1992), locally adjust illumination direction
to terrain features (Brassel, 1974a), or use neural net-
works for transferring shading from manual to digital
shaded reliefs (Jenny et al., 2020). Note that our method is not a physical model of atmospheric scattering, but an attempt at replicating the design principles of aerial perspective as used in manual relief shading.
2. Related work
In perceptual science, aerial perspective is also called
proximity luminance covariance (Schwartz & Sperling,
1983). Variations in both luminance (Dosher et al.,
1986) and saturation (Dresp-Langley & Reeves, 2014)
are eﬀective depth cues for the human visual system
(Ware, 2019), while variation in hue has a weak eﬀect on
perceived distance (Egusa, 1983; Guibal & Dresp, 2004).
While varying color based on elevation and slope expo-
sure to illumination can emphasize aerial perspective
(Imhof, 1982; Jenny & Hurni, 2006; Patterson, 2001b;
Tait, 2002), we focus on aerial perspective for monochromatic grayscale shading in this paper and adjust luminance only.

Luminance is the main input for shape-from-shading,
the visual system’s ability to extract a three-dimensional
shape from a flat shading. It has been demonstrated
that the human visual system assumes light shines from
above (Kleﬀner & Ramachandran 1992) or above-left
(Gerardin et al., 2007; Sun & Perona, 1998). This expec-
tation of light from above also causes the relief inversion
effect in shaded relief, where mountains are perceived as
valleys and valleys as mountains when illuminated from
below (Bernabé-Poveda & Çöltekin, 2015; Biland &
Çöltekin, 2017; Çöltekin & Biland, 2019).
In computer graphics, aerial perspective is known as
atmospheric depth and is commonly used to convey
depth in computer-generated scenes. Luminance and
saturation are varied with the distance to the observer
(Tai & Inanici, 2012). The atmospheric depth in land-
scape photographs can also be adjusted with computer
graphics algorithms. For example, Zhang et al. (2014)
propose a method inspired by landscape paintings that
adjusts local contrast to increase the illusion of depth in landscape photographs.

In cartography, aerial perspective is a key concept in
Imhof’s (1982) seminal work on terrain mapping for
both grayscale and colored relief shading. Imhof identi-
ﬁes the following beneﬁts of aerial perspective: “It
increases the three-dimensional eﬀect, supports the
inter-relationship of generalized forms and prevents
the optical illusion of relief inversion.” Imhof also
warns that aerial perspective should only be “introduced
when there are considerable diﬀerences in elevation,
and even then with great discretion” (Imhof, 1982,
p. 173). Excessively diminishing contrast at lower eleva-
tions could ﬂatten those areas and misrepresent the
actual character of the terrain.
In digital cartography, Yoeli’s pioneering work on ana-
lytical relief shading with digital elevation models (Yoëli,
1965; Yoëli, 1966) was soon followed by Brassel’s (1974a)
suggestion to simulate aerial perspective with a digital
algorithm. Brassel’s method increases the luminance con-
trast of pixels if they are above the mean elevation of the
model and decreases the contrast of pixels below the mean
elevation. Brassel’s method does not change the gray value
of flat areas. For each pixel, the difference between its gray value g and the gray value of flat areas gf is computed first. This difference is then multiplied by an elevation-dependent scale factor, and the gray value of flat areas gf is added to compute g′, the gray value with aerial perspective (Equation (1) – Erratum: we found an error in the equation in Brassel's English-language paper (Brassel, 1974a), which is corrected in a notice in this issue (Brassel, 2020) based on a German-language publication by Brassel, 1974b).

g′ = gf + (g − gf) · e^(z̄ · ln k)   (1)

The elevation-dependent scale factor is e^(z̄ · ln k), where ln k is a user-defined parameter to control the amount of contrast increase and decrease. The relative elevation z̄ is the result of a linear mapping of elevation to the range [−1, +1]: the lowest point is mapped to −1, the mean elevation is mapped to 0, and the highest elevation is mapped to +1 (Brassel, 1974a).
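Brassel's scaling can be sketched in a few lines of Python (an illustrative sketch, not code from the paper or its repository; the function name, default parameter values, and the two-segment mapping of elevation to the relative elevation z̄ are our own reading of the description above):

```python
import math

def brassel_aerial_perspective(g, z, z_min, z_mean, z_max, g_flat=0.5, ln_k=1.3):
    """Scale the contrast of gray value g (in [0, 1]) around the flat-area
    tone g_flat by the elevation-dependent factor e^(z_bar * ln k)."""
    # Relative elevation z_bar in [-1, +1]: lowest point -> -1,
    # mean elevation -> 0, highest point -> +1.
    if z >= z_mean:
        z_bar = (z - z_mean) / (z_max - z_mean)
    else:
        z_bar = (z - z_mean) / (z_mean - z_min)
    # Contrast is increased above the mean elevation (scale factor > 1)
    # and decreased below it (< 1); the flat-area gray value is unchanged.
    return g_flat + (g - g_flat) * math.exp(z_bar * ln_k)
```

Note that dark pixels at high elevations can yield values below 0 and bright pixels values above 1; as discussed later in the paper, such overshooting values must be clipped for display.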
Figure 2 shows a shaded relief image without Brassel’s
aerial perspective (left) and with aerial perspective (cen-
ter). Contrast of low areas is eﬀectively reduced for
elevations below the mean elevation of the terrain
model. However, Brassel’s algorithm tends to increase
contrast for high elevations too strongly; high mountain
ridges in Figure 2 (center) are overly accentuated and
rendered as solid black areas.
Brassel's original algorithm can be made more flexible by adding a user-defined "pivot" elevation zp for specifying the elevation above which contrast is increased and below which contrast is decreased (Jenny, 2000). Figure 2 (right) illustrates the mapping of elevation z (along the x-axis) to z̄ (along the y-axis) using two linear equations separated by the pivot elevation zp.

Figure 1. Shaded relief without aerial perspective (left) and with aerial perspective (right) (Charleston Peak, Nevada, USA). Aerial perspective is computed with our method. It reduces contrast of low elevations, while maintaining strong contrast of high elevations.

Figure 2. Shading without Brassel's aerial perspective (left); with aerial perspective, ln k = 1.3 (center); user-defined pivot elevation zp mapping elevation z to z̄ between −1 and +1 (right) (Glacier National Park, Montana, USA).
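The two-segment mapping with a user-defined pivot can be sketched as follows (an illustrative reading of the mapping described above; the function name is ours):

```python
def relative_elevation(z, z_min, z_pivot, z_max):
    """Map elevation z to z_bar in [-1, +1] using two linear segments
    separated by a user-defined pivot elevation: contrast is increased
    above the pivot (z_bar > 0) and decreased below it (z_bar < 0)."""
    if z >= z_pivot:
        return (z - z_pivot) / (z_max - z_pivot)
    return (z - z_pivot) / (z_pivot - z_min)
```

With the pivot set to the mean elevation, this reduces to Brassel's original mapping.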
Serebryakova et al. (2015) applied Brassel’s aerial
perspective to individual watersheds. Because the
watershed boundaries were extracted from a digital ele-
vation model, Serebryakova et al. computed z̄ for
individual watersheds instead of the entire elevation
model. This aimed at a more balanced application of
aerial perspective throughout a large elevation model.
An alternative algorithm for simulating aerial perspec-
tive in shaded relief was introduced by Jenny (2001) and
combines three components: (1) the elevation of a pixel
relative to the minimum and maximum elevation of the
terrain model; (2) the relative elevation of the pixel within
a slope, which is determined by tracing a slope line passing
through the pixel; and (3) the diﬀerence between the aspect
angle of the terrain at the pixel and the direction of illumi-
nation. This algorithm has two shortcomings. First, the
tracing of slope lines can get trapped in a local terrain
extremum when the slope line reaches a peak or a pit,
which results in a spotty distribution of the aerial perspec-
tive correction. Second, the orientation toward the illumi-
nation may not be available with some shading methods,
such as when blending multiple shadings with different illumination directions, or when using shading methods that do not explicitly model a direction of illumination, such as shading with neural networks.
As shown by Patterson (2001a), raster graphics software
can add aerial perspective to shaded relief by combining
a contrast-reduced copy of a shaded relief with the original
shading using an elevation mask. Figure 3 illustrates this
technique. In Adobe Photoshop, an adjustment curve layer
is added to a shaded relief (Figure 3, left). The adjustment
curve is shaped to reduce contrast (Figure 3, right), but
does not make major changes to the tone for ﬂat areas (the
vertical peak in the histogram). The eﬀect of the curve on
the original shaded relief is controlled by a mask (high-
lighted in yellow in Figure 3, left); the mask’s pixel values
are modiﬁed elevation values. This method can be used to
increase and decrease contrast with elevation. It provides
ﬂexibility and the option to make local adjustments if
needed. However, this method is rather cumbersome to
control. Our proposed method for adding aerial perspective, described in the following section, builds on this technique.
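In code, the masked-curve technique amounts to a per-pixel linear blend between the original shading and a contrast-reduced copy, weighted by the elevation mask (a simplified sketch of the idea, not Photoshop's actual compositing pipeline; all names are ours):

```python
def mask_blend(original, low_contrast, mask):
    """Where the elevation mask is 1 the contrast-reduced copy shows
    through; where it is 0 the original shading is kept (all values
    are per-pixel gray values or weights in [0, 1])."""
    return [m * lc + (1.0 - m) * o
            for o, lc, m in zip(original, low_contrast, mask)]
```
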
Neural network shading is a recently developed digital
method for replicating hand-drawn shaded relief (Jenny
et al., 2020). First, a deep neural network is trained with
manual shaded relief images, and then the network gener-
ates a shaded relief using a digital elevation model of
another area. The network learns essential design principles
such as locally adjusting the direction of illumination to
accentuate individual terrain features or varying brightness
and contrast to imitate the aerial perspective eﬀect applied
in manual relief shading. Figure 4 shows that neural net-
work shading can imitate the aerial perspective eﬀect to
some degree; high elevations are shown with stronger con-
trast than low elevations. The development of our method
for adjusting aerial perspective was inspired by the desire to
allow users of neural network relief shading to explicitly
and precisely control the amount of aerial perspective.
3. Aerial perspective
Our method for applying aerial perspective to a shaded
relief is inspired by the raster-based technique for reducing luminance contrast suggested by Patterson (2001a). We convert the raster graphics approach to two equations. The first equation reduces the amount of contrast: it imitates the effect of an adjustment curve and produces grayscale values with aerial perspective applied everywhere. The second equation imitates the effect of the elevation mask and blends the values of the original shaded relief with the values generated by the first equation.

Figure 3. Contrast reduction of low elevations with an Adobe Photoshop curve layer (right) and a mask consisting of modified elevation values (left). Contrast is reduced in the lowlands (Churfirsten, Switzerland standard elevation model from Kennelly et al., 2020).
Contrast of a grayscale pixel value v is reduced to v′ with Equation 2, ensuring that the gray value of flat areas vf does not change (Figure 5, left). The user controls the amount of contrast reduction with the parameter Δ, which is the gray value assigned to a hypothetical black pixel at the lowest elevation in the terrain model.

v′ = Δ + (1 − Δ/vf) · v, with v′, vf and v ∈ [0, 1] and Δ ≤ vf   (2)
The final gray value v″ is computed by linearly blending the original pixel value v with v′ (Equation 3); Equation 4 computes the blending weight w for Equation 3.

v″ = w · v′ + (1 − w) · v   (3)

w = (1 − min(z, zt)/zt)^k, with w, z and zt ∈ [0, 1] and k > 0   (4)

In Equation 4, z is the elevation of the pixel scaled to the range [0, 1] with z = (z − zmin)/(zmax − zmin), where zmin and zmax are the lowest and highest elevations in the model. Parameter k controls the vertical distribution of the aerial perspective effect. The elevation threshold parameter zt limits the effect to low elevations. Figure 5 (right) illustrates Equation 4. In this example, the elevation threshold zt was set to 0.7, resulting in aerial perspective adjustment applied to pixels below 70% of the maximum elevation.
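Equations 2–4 translate into a few lines per pixel (an illustrative sketch; the function name and default values are ours, and the blending weight is written as we reconstruct it from Figure 5 and the text):

```python
def aerial_perspective(v, z, v_flat, delta, k=2.0, z_t=1.0):
    """Apply aerial perspective to one shaded-relief gray value.

    v: original gray value in [0, 1]; z: pixel elevation scaled to [0, 1];
    v_flat: gray value of flat areas; delta: tone of a hypothetical black
    pixel at the lowest elevation (delta <= v_flat); k > 0: vertical
    falloff; z_t: elevation threshold limiting the effect to low elevations.
    """
    # Equation 2: reduce contrast around the flat-area tone.
    v_reduced = delta + (1.0 - delta / v_flat) * v
    # Equation 4: blending weight, 1 at the lowest elevation, 0 for z >= z_t.
    w = (1.0 - min(z, z_t) / z_t) ** k
    # Equation 3: blend the reduced-contrast value with the original.
    return w * v_reduced + (1.0 - w) * v
```
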
The Δ parameter controls the amount of contrast reduction. Because Δ is required to be no larger than the gray value of flat areas (i.e., Δ ≤ vf), a graphical user interface can let the user select a parameter s between 0 and 1, which is mapped to Δ with Δ = s · vf. This simplifies Equation 2 to v′ = s · vf + (1 − s) · v. Figure 6
illustrates the effect of increasing Δ. Contrast reduction is strongest for low elevations and decreases towards high elevations. A recommendable value for many elevation models is Δ = 2

Figure 4. Standard Lambertian diffuse shading as commonly used for relief shading (left) compared to neural network shading (right). The neural network shading shows high elevations with stronger contrast than low elevations (North Caucasus, Kabardino-Balkarian Republic, Russia, SRTM elevation model, 85 × 85 km).

Figure 5. Left: linear contrast reduction of pixel values v to v′ without changing the gray value of flat areas. Right: illustration of Equation 4 for computing blending weight w from relative elevation z.
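With the user-facing parameter s, the contrast reduction becomes a plain linear interpolation toward the flat-area tone (a sketch; the function name is ours):

```python
def reduce_contrast(v, v_flat, s):
    """Equation 2 with delta = s * v_flat: s = 0 leaves the shading
    unchanged, s = 1 flattens every pixel to the flat-area tone,
    and delta <= v_flat holds automatically for s in [0, 1]."""
    return s * v_flat + (1.0 - s) * v
```
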
The parameter k controls the vertical distribution of the
contrast reduction (Figure 7). The k parameter is less
important than Δ, because the visual eﬀect of diﬀerent
values for k is less intuitive to predict. Designers of software
applications may reduce the number of options oﬀered in
a user interface, and select a constant value for k. If execu-
tion time is of concern, k can be set to 1 or 2, which
eliminates the call of the computationally expensive pow
function. Otherwise, recommendable values for k are
between 1 and 4.
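The pow elimination mentioned above is a simple special-casing (a sketch with names of our own; for brevity, zt is fixed at 1 so that w = (1 − z)^k):

```python
def blend_weight(z, k):
    """Blending weight w = (1 - z)**k for z in [0, 1], avoiding the
    comparatively expensive pow call for the common cases k = 1 and k = 2."""
    t = 1.0 - z
    if k == 1:
        return t       # linear falloff, no pow
    if k == 2:
        return t * t   # one multiplication instead of a pow call
    return t ** k
```
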
We found that for most applications, zt is best set to
100%, that is, the contrast reduction applies throughout
the entire range of elevation values. With zt always equal
to 100%, the user interface can also be simpliﬁed.
The gray value of flat areas is preserved as with Brassel's method; vf is simple to compute for Lambertian diffuse shading (Yoëli, 1965) by using a vertical normal vector for flat areas. For shading methods that do not explicitly model terrain normal vectors (for example, when transferring shading with a neural network), the value of flat areas can be determined by shading a small patch of a flat elevation model. Hence, our method does not require identifying flat areas in a digital elevation model.
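For Lambertian diffuse shading, the flat-area value follows from the dot product of the vertical surface normal with the unit light vector, which reduces to the sine of the light's elevation angle (a sketch; the default angle is our assumption, not a value from the paper):

```python
import math

def flat_area_tone(light_elevation_deg=45.0):
    """Gray value of flat terrain under Lambertian diffuse shading:
    the dot product of the vertical surface normal (0, 0, 1) with the
    unit light vector, i.e. the light vector's z component,
    sin(elevation angle), independent of the light azimuth."""
    return max(0.0, math.sin(math.radians(light_elevation_deg)))
```
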
Figure 8 compares our method for aerial perspective simu-
lation to Brassel’s 1974 method. We rendered the top image
with a neural network that was trained with shaded reliefs
created for the Swiss national map series (Jenny et al.,
2020), which produced only a weak aerial perspective eﬀect.
This shading was then enhanced by adding aerial perspec-
tive with Brassel’s 1974 method (middle) and with our
method (bottom). The top shading without explicit aerial
perspective simulation follows established design principles
for relief shading: large landforms are accentuated, the
direction of illumination adjusts to important landforms,
and a consistent tone is applied to ﬂat areas. However, the
aerial perspective eﬀect is weak; the highest peaks are
shown with the same contrast in luminance as the lowland.
The aerial perspective added with Brassel’s method cre-
ates a very strong luminance contrast (Figure 8, middle).
While the contrast of lowland areas is appropriately
reduced (compare the left-most area on the top and the
middle ﬁgure), the shaded slopes of high elevations become
excessively dark. The darkest and also the brightest areas
show very little tonal variation, because Brassel’s aerial
perspective simulation can result in dark values that are
negative and need to be corrected to a zero value, or
extremely bright values that are greater than the maximum
representable value for white. This results in the loss of
subtle shading gradients and eliminates small details in the
highest areas. This loss of information due to “overshooting
white" and "undershooting black" values is exacerbated by
the overall increase of image brightness that is necessary before combining the shaded relief with other raster and vector features for the final map.

Figure 6. Increasing amount of aerial perspective Δ from top left to lower right; vf is the value of flat areas between 0 and 1; k = 1.5 for all four shadings (Valdez, Alaska standard elevation model from Kennelly et al., 2020).

Figure 7. Increasing k from top left to lower right; Δ = vf for all shadings.
The aerial perspective added with our method only
reduces the absolute diﬀerence between bright and dark
pixels and avoids losing information that may occur
with Brassel’s method (Figure 8, bottom). The overall
image is considerably brighter than with Brassel’s
method. The transition in contrast from lowlands to
the highest peaks is subtle, as suggested by Imhof
(1982, p. 173), but there is still a clear difference in luminance contrast between the highest peaks and the considerably lower foothills.

Figure 8. Shaded relief without aerial perspective (top), with aerial perspective applied using Brassel's method (middle), and our method (bottom) (Wasatch Range, Utah, USA; the elevation model and shaded relief images are available at Jenny & Patterson, 2020).

Figure 9 shows the difference between the original shading without aerial perspective of Figure 8 (top) and the shaded relief with
aerial perspective computed with our method in
Figure 8 (bottom). White and bright gray in Figure 9
indicate areas that were not or only minimally changed
by aerial perspective. As expected, these areas include
ﬂat planes on the left of Figures 8 and 9, where the tone
of ﬂat plains did not change. The highest peaks also did
not change much, which conﬁrms visually that contrast
reduction decreases gradually with increasing elevation.
Dark red indicates areas with the largest absolute diﬀer-
ence between the two shadings. These areas are mainly
shaded slopes at low and intermediate elevations, where
contrast reduction is strong.
Conclusion

We introduce a relatively straightforward method that is
easy to add to existing relief rendering software for com-
puting shaded relief from digital elevation models. The
method can be applied after a grayscale value has been
computed from an elevation model using any digital
shading algorithm. It builds on a raster graphics method
for elevation-dependent adjustment of contrast, but our
method makes it easier for the user to control and adjust
the amount and the vertical distribution of the aerial
perspective eﬀect. Our method also improves upon an
existing method by Brassel (1974a) by not excessively
darkening high elevations. In many cases, the resulting
lighter shaded relief is applicable to cartographic produc-
tion “as is” without the need for additional adjustments.
Acknowledgments

The authors thank the anonymous reviewers for their valuable comments and Brooke E. Marston for copy editing this article.
Disclosure statement

No potential conflict of interest was reported by the authors.
ORCID

Bernhard Jenny http://orcid.org/0000-0001-6101-6100
Tom Patterson http://orcid.org/0000-0003-4813-895X
Data Availability Statement
Java source code for applying aerial perspective to a shaded
relief image is openly available in the Zenodo repository,
combined with sample data for creating Figure 8 (shaded
relief images without and with aerial perspective, and the
corresponding digital elevation model of the Wasatch
Range, Utah, USA) at Jenny and Patterson (2020) “Aerial
Perspective for Shaded Relief”, https://doi.org/10.5281/
References

Bernabé-Poveda, M. A., & Çöltekin, A. (2015). Prevalence of
the terrain reversal eﬀect in satellite imagery. International
Journal of Digital Earth, 8(8), 640–655. https://doi.org/10.
Figure 9. Absolute difference between shading without aerial perspective (Figure 8, top) and with aerial perspective (Figure 8, bottom). White indicates areas that were not changed by aerial perspective; dark red indicates the largest absolute differences.

Biland, J., & Çöltekin, A. (2017). An empirical assessment of the impact of the light direction on the relief inversion effect in shaded relief maps: NNW is better than NW. Cartography and Geographic Information Science, 44(4).
Brassel, K. (2020). Correction to Brassel (1974). Cartography and Geographic Information Science. Advance online publi-
Brassel, K. (1974a). A model for automatic hill-shading. The
American Cartographer, 1(1), 15–27. https://doi.org/10.1559/
Brassel, K. (1974b). Ein Modell zur automatischen
Schräglichtschattierung. In K. Kischbaum & K.-H. Meine
(Eds.), International yearbook of cartography (pp. 66–77).
Çöltekin, A., & Biland, J. (2019). Comparing the terrain reversal
eﬀect in satellite images and in shaded relief maps: An exam-
ination of the eﬀects of color and texture on 3D shape percep-
tion from shading. International Journal of Digital Earth, 12
(4), 442–459. https://doi.org/10.1080/17538947.2018.1447030
Dosher, B. A., Sperling, G., & Wurst, S. A. (1986). Tradeoﬀs
between stereopsis and proximity luminance covariance as
determinants of perceived 3D structure. Vision Research, 26
(6), 973–990. https://doi.org/10.1016/0042-6989(86)90154-9
Dresp-Langley, B., & Reeves, A. (2014). Eﬀects of saturation
and contrast polarity on the ﬁgure-ground organization of
color on gray. Frontiers in Psychology, 5, 1136. https://doi.
Egusa, H. (1983). Eﬀects of brightness, hue, and saturation on
perceived depth between adjacent regions in the visual ﬁeld.
Perception, 12(2), 167–175. https://doi.org/10.1068/p120167
Gerardin, P., de Montalembert, M., & Mamassian, P. (2007).
Shape from shading: New perspectives from the Polo Mint
stimulus. Journal of Vision, 7(11), 13. https://doi.org/10.1167/
Guibal, C. R., & Dresp, B. (2004). Interaction of color and
geometric cues in depth perception: When does “red” mean
“near”? Psychological Research, 69(1–2), 30–40. https://doi.
Imhof, E. (1982). Cartographic relief presentation. De Gruyter.
Jenny, B. (2000). Computergestützte Schattierung in der
Kartograﬁe/Estompage assisté par ordinateur en cartogra-
phie [Unpublished master’s thesis]. ETH Zürich. http://
Jenny, B. (2001). An interactive approach to analytical relief
shading. Cartographica: The International Journal for
Geographic Information and Geovisualization, 38(1&2),
Jenny, B., Heitzler, M., Singh, D., Farmakis-Serebryakova, M.,
Liu, J. C., & Hurni, L. (2020). Cartographic relief shading
with neural networks. IEEE Transactions on Visualization
and Computer Graphics.
Jenny, B., & Hurni, L. (2006). Swiss-style colour relief shading
modulated by elevation and by exposure to illumination.
The Cartographic Journal, 43(3), 198–207. https://doi.org/
Jenny, B., & Patterson, T. (2020) Aerial perspective for shaded
relief [Data set]. Zenodo. https://doi.org/10.5281/zenodo.
Kennelly, P., Patterson, T., Jenny, B., Huﬀman, D., Marston, B.,
Bell, S., & Tait, A. (2020). Elevation models for reproducible
evaluation of terrain representation. The Cartographic
Journal. Advance online publication. https://doi.org/10.
Kennelly, P. J., & Stewart, A. J. (2006). A uniform sky
illumination model to enhance shading of terrain and
urban areas. Cartography and Geographic Information
Science, 33(1), 21–36. https://doi.org/10.1559/
Kleﬀner, D. & Ramachandran, V. S. (1992). On the perception
of shape from shading. Perception & Psychophysics, 52(1),
Mark, R. (1992). Multidirectional, oblique-weighted, shaded-
relief image of the Island of Hawaii. No. 92-422. US Dept. of
the Interior, US Geological Survey.
Marston, B. E., & Jenny, B. (2015). Improving the representa-
tion of major landforms in analytical relief shading.
International Journal of Geographical Information Science,
29(7), 1144–1165. https://doi.org/10.1080/13658816.2015.
Patterson, T. (2001a). Creating Swiss-style shaded relief in
Patterson, T. (2001b). See the light: How to make illuminated
shaded relief in Photoshop 6.0. http://www.shadedrelief.
Schwartz, B. J., & Sperling, G. (1983). Luminance controls the
perceived 3-D structure of dynamic 2-D displays. Bulletin
of the Psychonomic Society, 21(6), 456–458. https://doi.org/
Serebryakova, M., Veronesi, F., & Hurni, L. (2015). Sine wave,
clustering and watershed analysis to implement adaptive illu-
mination and generalisation in shaded relief representations.
In Proceedings of the 27th International Cartographic
Conference, paper 597.
Sun, J., & Perona, P. (1998). Where is the Sun? Nature
Neuroscience, 1(3), 183–184. https://doi.org/10.1038/
Tai, N. C., & Inanici, M. (2012). Luminance contrast as depth
cue: Investigation and design applications. Computer-aided
Design and Applications, 9(5), 691–705. https://doi.org/10.
Tait, A. (2002). Photoshop 6 tutorial: How to create basic
colored shaded relief. Cartographic Perspectives, 42, 12–17.
Veronesi, F., & Hurni, L. (2014). Changing the light azimuth in
shaded relief representation by clustering aspect. The
Cartographic Journal, 51(4), 291–300. https://doi.org/10.
Veronesi, F., & Hurni, L. (2015). A GIS tool to increase the
visual quality of relief shading by automatically changing
the light direction. Computers & Geosciences, 74, 121–127.
Ware, C. (2019). Information visualization: Perception for
design. Morgan Kaufmann.
Yoëli, P. (1965). Analytical hill shading (a cartographic
experiment). Surveying and Mapping, 25(4), 573–579.
Yoëli, P. (1966). The mechanisation of analytical hill shading.
The Cartographic Journal, 4(2), 82–88. https://doi.org/10.
Zhang, X., Chan, K. L., & Constable, M. (2014). Atmospheric
perspective eﬀect enhancement of landscape photographs
through depth-aware contrast manipulation. IEEE
Transactions on Multimedia, 16(3), 653–667. https://doi.