Producing Anaglyphs from Synthetic Images
William Sanders, David F. McAllister
Department of Computer Science, North Carolina State University, Raleigh, NC USA 27695-8206
ABSTRACT
Distance learning and virtual laboratory applications have motivated the use of inexpensive visual stereo solutions
for computer displays. The anaglyph method is such a solution. Several techniques have been proposed for the
production of anaglyphs. We discuss three approaches: the Photoshop algorithm and its variants, the least squares
algorithm proposed by Eric Dubois that optimizes in the CIE color space, and the midpoint algorithm that minimizes the
sum of the distances between the anaglyph color and the left and right eye colors in CIE L*a*b*. Our results show that
each method has its advantages and disadvantages in faithful color representation and in stereo quality as it relates to
region merging and ghosting.
Keywords: anaglyph, CRT, LCD, Photoshop, CIE, RGB, ghosting, region merge, CIELab
1. ANAGLYPHS
Recent interest in virtual laboratories for distance learning applications has revived research in anaglyphs because
the stereo image can be transmitted efficiently, inexpensive viewing devices can be used and several people can view
the image simultaneously. Anaglyphs require the user to view the image with glasses having different color filters for
each eye. The anaglyph color at a pixel is computed from the left eye color and the right eye color at the pixel. Most
filters for electronic displays are designed so that the left eye filter blocks combinations of blue and green and the right
eye filter blocks red. Blocking means that the color is seen as black or dark gray through the filter. In the discussion that
follows, we use the red/cyan glasses No. 7003 from REEL3D (www.reel3d.com).
We assume no depth information in the scene is available. We consider some of the anomalies that may occur in
creating anaglyphs from stereo pairs. We ignore the important issue of retinal rivalry [2], which can create the appearance of ghosting. We will restrict our attention to CRTs, but the analysis is similar for other color displays that use the RGB color system.
2. ANAGLYPH METHODS
The set of representable colors, or color solid, on a display using the RGB color system is the unit RGB cube (3-cube). The cube lies in the 3-dimensional vector space R3. The basis colors, or primaries, are red, green and blue. The "RGB color solid" in the 6-dimensional vector space R6 is a unit hypercube (6-cube) with 64 vertices corresponding to the RGB corners of the cube in R3 for the left and right eyes. Counting in base 2, we can order the vertices of the 6-cube: [0,0,0,0,0,0] = [black, black], [0,0,0,0,0,1] = [black, blue], …, [1,1,1,1,1,1] = [white, white]. Anaglyph methods compute a map from the 6-cube in R6 to R3. We can examine the order and placement of the images of the 6-cube vertices to gain an understanding of each method.
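As a concrete illustration of this ordering, the short sketch below (a minimal Python illustration written for this discussion, not code from the paper) enumerates the 64 vertices of the 6-cube in base-2 order and labels each with its left/right RGB corner names.

```python
import itertools

# Illustrative sketch: the base-2 ordering of the 6-cube vertices described above.
NAMES = {(0, 0, 0): "black", (0, 0, 1): "blue",    (0, 1, 0): "green",  (0, 1, 1): "cyan",
         (1, 0, 0): "red",   (1, 0, 1): "magenta", (1, 1, 0): "yellow", (1, 1, 1): "white"}

for v in itertools.product((0, 1), repeat=6):        # counts 000000, 000001, ..., 111111
    left, right = v[:3], v[3:]                       # (r_l, g_l, b_l) and (r_r, g_r, b_r)
    print(list(v), "=", [NAMES[left], NAMES[right]])
```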
Three of the algorithms we analyze are linear; there is a 3 x 6 matrix representation of the map from R6 to R3 in each case. We define

v = [r_l, g_l, b_l, r_r, g_r, b_r]^T,

the RGB coordinates of the left and right eye color channels. The linear algorithms compute

[r, g, b]^T = Bv.

The linear methods differ only in the matrix B used to compute the map.
If a method produces colors that are not representable on the display (a coordinate lies outside the interval [0, 1]),
clipping (projection) is used. Clipping maps nonrepresentable colors to the surface of the 3-cube. A projection map is
linear.
Region merging occurs when adjacent regions of different colors are mapped to the same anaglyph color. This can affect the depth and detail perceived in stereo images [1]. We note that clipping can cause region merging. Ghosting or cross-talk means that one eye can see part of the image intended for the other eye.
The spectral distributions of the phosphors on CRTs are shown in Figure 1. They have been uniformly scaled so the
maximum value of red is 1. We include the spectral distributions of the primaries for LCD monitors.
The conversion from RGB space to CIE space requires a linear transformation represented by the matrix C [3]. For the CRT spectral distributions in Figure 1, the matrix C, whose rows contain the entries (X_R, X_G, X_B), (Y_R, Y_G, Y_B) and (Z_R, Z_G, Z_B), is

C = [ 11.6638    8.3959    4.65843
       7.10807  16.6845    2.45008
       0.527874  3.79124  24.0604  ].
The gamut (on the CIE chromaticity diagram) for the spectral distributions given in Figure 1 is the RGB triangle in
Figure 2.
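The conversion is a single matrix product per pixel; the sketch below (a minimal Python/NumPy illustration written for this article, not code from the paper) maps a display RGB color to CIE XYZ with C and then to chromaticity coordinates.

```python
import numpy as np

# Matrix C from the text: converts display RGB (values in [0, 1]) to CIE XYZ.
C = np.array([[11.6638,   8.3959,   4.65843],
              [ 7.10807, 16.6845,   2.45008],
              [ 0.527874, 3.79124, 24.0604 ]])

def rgb_to_xyz(rgb):
    return C @ np.asarray(rgb, dtype=float)

def chromaticity(xyz):
    X, Y, Z = xyz
    return X / (X + Y + Z), Y / (X + Y + Z)      # (x, y) point on the chromaticity diagram

# The red primary maps to one vertex of the RGB gamut triangle in Figure 2.
print(chromaticity(rgb_to_xyz([1.0, 0.0, 0.0])))
```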
3. ANAGLYPH FILTERS
The colors visible through a filter depend on the transmission function f(λ) of the filter. The function f specifies the percentage of each visible wavelength transmitted by the filter. The product of the phosphor spectral distribution with the transmission function gives the spectral distribution of the phosphor as seen through the filter. The matrices A_l for the left eye and A_r for the right eye convert the resulting filtered colors to CIE coordinates. For the red/cyan glasses, these matrices are

A_l = [ 5.42327       0.807004     0.047325
        2.70972       0.50201      0.0250529
        0.0000550941  0.000411221  0.00240686 ],

A_r = [ 0.180431  1.6395    2.00309
        0.448214  6.31551   1.35757
        0.289201  2.3925   11.062   ].
Multiplying these matrices by each color in the 3-cube produces a new color solid in CIE space. The resulting gamuts for the left and right eyes are labeled in Figure 2. The transmission functions for each filter are shown in Figure 3. The color solids intersect only at the origin and hence there is perfect blocking; there can be no ghosting due to intersection of the left and right eye color solids. The color solids in CIE space are shown in Figure 4. They have been scaled by 0.25 for comparison with the CIE chromaticity triangle. Although CIE space is not a uniform color space, the large differences between the three color solids should be observed.
4. SAMPLE IMAGES
For algorithm comparisons we use a stereo pair of an Indian mother and her daughter, shown in Figure 5 [4]. The images are arranged for cross viewing. The original images were 443 x 389 pixels.
The region of comparison is the white rectangle shown in Figure 6 of the left eye view. This is the region [150, 190] x [200, 230] in the original scene. Figure 7 shows this 41 x 31 region in the left and right eyes, magnified so that individual pixels are visible.
Low intensities of each primary will appear to be blocked by their respective filters; we call the intensity level below which this occurs the blocking threshold. For example, red intensities below 60/255 = .24 appear to be blocked by the left eye filter.
If the red channels of the pixels in the same area of the anaglyph are not below the blocking threshold, the region will be partially seen in both eyes, causing ghosting. We consider the pixel with coordinates (13, 18) in the region. The color of this pixel in the right eye view is [216, 197, 161]/255 = [0.85, 0.77, 0.63] (rounded), and in the left eye view it is [104, 22, 18]/255 = [0.41, 0.09, 0.07]. An algorithm that chooses a red channel value between the red channel values of the left and right eye colors will produce an anaglyph red channel value above the blocking threshold and cause ghosting.
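The blocking-threshold reasoning above can be phrased as a simple per-pixel test. The sketch below is an illustrative Python check written for this discussion; the 60/255 threshold is the paper's empirical estimate for these glasses on a CRT and should be treated as display- and filter-dependent.

```python
BLOCKING_THRESHOLD = 60 / 255   # ~0.24, empirical estimate for the left eye (red) filter

def red_visible_through_left_filter(anaglyph_rgb):
    """True if the anaglyph's red channel will not appear blocked (black)
    through the left eye's red filter."""
    return anaglyph_rgb[0] >= BLOCKING_THRESHOLD

# Any anaglyph red value between the comparison pixel's left (0.41) and right (0.85)
# red channels stays visible, which is the condition the text associates with ghosting.
print(red_visible_through_left_filter([0.41, 0.77, 0.63]))   # True
```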
5. THE PHOTOSHOP ALGORITHM
5.1 Algorithm description
In the original Photoshop algorithm [5] (PS) the red channel of the left eye view becomes the red channel of the anaglyph, and the green and blue channels of the right eye view become the green and blue channels of the anaglyph. This is equivalent to projecting (computing the least squares approximation of) the left eye RGB point onto the red axis of the RGB cube (setting the G and B channels to zero) and the right eye RGB point onto the GB plane (setting the red channel to zero). See Figure 8. The two resulting vectors are added to compute the color of the pixel in the anaglyph. This method is linear. The matrix B is

B = [ 1 0 0 0 0 0
      0 0 0 0 1 0
      0 0 0 0 0 1 ].
The algorithm ignores the transmission function of the glasses. The computed anaglyph is the same for all filters.
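Because the map is a fixed 3 x 6 matrix, the PS anaglyph can be computed with a single matrix product per pixel. The sketch below is a minimal Python/NumPy illustration written for this article (image loading and display are omitted); it is not the Photoshop implementation itself.

```python
import numpy as np

B_PS = np.array([[1, 0, 0, 0, 0, 0],
                 [0, 0, 0, 0, 1, 0],
                 [0, 0, 0, 0, 0, 1]], dtype=float)

def ps_anaglyph(left_rgb, right_rgb):
    """left_rgb, right_rgb: arrays of shape (..., 3) with values in [0, 1]."""
    v = np.concatenate([left_rgb, right_rgb], axis=-1)   # [r_l, g_l, b_l, r_r, g_r, b_r]
    return v @ B_PS.T                                    # picks out [r_l, g_r, b_r]

# Comparison pixel from Section 4:
print(ps_anaglyph(np.array([0.41, 0.09, 0.07]), np.array([0.85, 0.77, 0.63])))
# -> [0.41, 0.77, 0.63], the anaglyph color discussed in Section 5.3
```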
5.2 Region Merge
All colors with the same left eye red channel and the same blue/green channels in the right eye view will be mapped
to the same color.
5.3 Ghosting
The comparison pixel has anaglyph color [104, 197, 161]/255 = [.41, .77, .63]. The red channel is not below the blocking threshold, so there will be ghosting, as seen in Figure 9.
5.4 Conversion to grayscale
An alternative method converts the left eye image to grayscale first and then projects as described above.
If we use a linear grayscale conversion algorithm (such as the NTSC luminance standard, where grayscale = .299r + .587g + .114b [6]), we premultiply v by a partitioned matrix of the form

[ G 0
  0 I ]

where I is the 3 x 3 identity and G is the matrix

G = [ α1 α2 α3
       0  1  0
       0  0  1 ]

where the αi's are nonnegative, sum to 1, and are the coefficients that convert the left eye color to a grayscale value placed in the red channel. We then apply the matrix B above. This gives the new matrix

B = [ α1 α2 α3 0 0 0
       0  0  0 0 1 0
       0  0  0 0 0 1 ].
No clipping is required. Region merge and loss of detail are still problems since the red channel in the right eye
view is ignored.
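A sketch of this variant (again an illustrative NumPy fragment written for this article, not the original implementation) builds the modified 3 x 6 matrix from the NTSC coefficients cited above and applies it per pixel.

```python
import numpy as np

alpha = np.array([0.299, 0.587, 0.114])   # NTSC grayscale weights for the left eye view

B_GRAY = np.zeros((3, 6))
B_GRAY[0, :3] = alpha                     # red channel = grayscale of the left eye color
B_GRAY[1, 4] = 1.0                        # green channel = right eye green
B_GRAY[2, 5] = 1.0                        # blue channel  = right eye blue

def gray_ps_anaglyph(left_rgb, right_rgb):
    v = np.concatenate([left_rgb, right_rgb], axis=-1)
    return v @ B_GRAY.T                   # outputs stay in [0, 1]; no clipping needed

# Comparison pixel: the red channel becomes roughly 0.18 instead of 0.41.
print(gray_ps_anaglyph(np.array([0.41, 0.09, 0.07]), np.array([0.85, 0.77, 0.63])))
```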
6. THE MIDPOINT ALGORITHM
6.1 Algorithm description
In the midpoint algorithm (MID) we compute, in the uniform CIE L*a*b* [7] space (Lab), the point P that minimizes the sum of the distances to the transformed filtered left and filtered right eye color values at a pixel. The midpoint of the line segment joining the two colors minimizes this sum. We map the left eye color to CIE space using A_l and then to Lab space using the Lab transformation. The algorithm is similar for the right eye, using A_r. We use the D65 [7] illuminant values for Xn, Yn and Zn in the MID map. The results for illuminants A and C were similar.
We apply the inverse map to return to RGB space. We then multiply by N, where N is a normalizing diagonal matrix that ensures the [white, white] vertex of the 6-cube is mapped to the white vertex of the 3-cube.
The algorithm is not linear. The images in Lab space of the primary axes of the A_l and A_r color solids in CIE space are shown in Figure 10. One of the axes for each solid is very short and is difficult to see in the figure.
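To make the per-pixel computation concrete, the sketch below implements the MID idea in Python/NumPy. It is a reconstruction under stated assumptions, not the authors' code: the reference white used in the Lab conversion (here the display white C·[1,1,1]^T rather than the paper's D65 Xn, Yn, Zn) and the final normalization are illustrative choices.

```python
import numpy as np

C  = np.array([[11.6638,   8.3959,   4.65843],
               [ 7.10807, 16.6845,   2.45008],
               [ 0.527874, 3.79124, 24.0604 ]])
Al = np.array([[5.42327,      0.807004,    0.047325  ],
               [2.70972,      0.50201,     0.0250529 ],
               [0.0000550941, 0.000411221, 0.00240686]])
Ar = np.array([[0.180431, 1.6395,  2.00309],
               [0.448214, 6.31551, 1.35757],
               [0.289201, 2.3925, 11.062  ]])

WHITE = C @ np.ones(3)   # assumed reference white (stand-in for the paper's D65 Xn, Yn, Zn)

def xyz_to_lab(xyz, white=WHITE):
    f = lambda t: np.cbrt(t) if t > (6/29)**3 else t / (3*(6/29)**2) + 4/29
    fx, fy, fz = (f(c / w) for c, w in zip(xyz, white))
    return np.array([116*fy - 16, 500*(fx - fy), 200*(fy - fz)])

def lab_to_xyz(lab, white=WHITE):
    L, a, b = lab
    fy = (L + 16) / 116
    fx, fz = fy + a/500, fy - b/200
    g = lambda t: t**3 if t > 6/29 else 3*(6/29)**2 * (t - 4/29)
    return white * np.array([g(fx), g(fy), g(fz)])

def mid_raw(left_rgb, right_rgb):
    # Midpoint in Lab of the filtered left and right eye colors, mapped back to RGB.
    lab_mid = 0.5 * (xyz_to_lab(Al @ left_rgb) + xyz_to_lab(Ar @ right_rgb))
    return np.linalg.solve(C, lab_to_xyz(lab_mid))

N = 1.0 / mid_raw(np.ones(3), np.ones(3))   # diagonal of the normalizing matrix

def mid_anaglyph(left_rgb, right_rgb):
    return np.clip(N * mid_raw(left_rgb, right_rgb), 0.0, 1.0)   # clip = project to the 3-cube

# Comparison pixel from Section 4; exact values depend on the white point assumption above.
print(mid_anaglyph(np.array([0.41, 0.09, 0.07]), np.array([0.85, 0.77, 0.63])))
```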
6.2 Region Merge
All colors in the left and right eye color solids with the same midpoint in Lab space are mapped to the same RGB color. The images of the 6-cube vertices under MID are shown in Figure 11. Some vertices lie outside the RGB cube and clipping will be required.
6.3 Ghosting
While the MID method uses the filter properties, severe ghosting can occur because the left and right eye views
may both have relatively high intensity red (or green/blue) components. An example is given in Figure 12. The color of
the comparison pixel is [109, 184, 145]/255 = [.43, .72, .57].
We note that we also tried the midpoint calculation in CIE space, which yields a linear algorithm; the results were poor.
7. THE DUBOIS ALGORITHM
7.1 Algorithm Description
Eric Dubois [3] suggests a least squares (LS) projection in R6 onto the 3D subspace spanned by the 6-dimensional columns of the partitioned 6 x 3 matrix R defined below, with right-hand side given by the partitioned vector D, where C is the RGB-to-CIE conversion matrix defined above:

R = [ A_l
      A_r ],

D = [ C 0
      0 C ] v.
The projection minimizes the Euclidean length of the vector R[r, g, b]^T - D, and is the least squares approximation or projection.
The algorithm also uses scaling by a diagonal matrix N as in the midpoint method. The linear map from R6 to R3 is therefore

[r, g, b]^T = N (R^T R)^(-1) R^T D.

The matrix B is

B = [  0.4561      0.500484    0.176381    -0.0434706  -0.0879388  -0.00155529
      -0.0400822  -0.0378246  -0.0157589    0.378476    0.73364    -0.0184503
      -0.0152161  -0.0205971  -0.00546856  -0.0721527  -0.112961    1.2264     ].
The relationship of the RGB and LS gamuts is shown in Figure 13. The LS gamut contains nonvisible colors in addition to colors not realizable on a CRT.
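The formula above fully determines B once A_l, A_r and C are fixed. The sketch below (an illustrative Python/NumPy reconstruction written for this article, not the authors' code) rebuilds B from those matrices and applies it per pixel; small differences from the printed B may remain depending on the exact normalization convention.

```python
import numpy as np

C  = np.array([[11.6638,   8.3959,   4.65843],
               [ 7.10807, 16.6845,   2.45008],
               [ 0.527874, 3.79124, 24.0604 ]])
Al = np.array([[5.42327,      0.807004,    0.047325  ],
               [2.70972,      0.50201,     0.0250529 ],
               [0.0000550941, 0.000411221, 0.00240686]])
Ar = np.array([[0.180431, 1.6395,  2.00309],
               [0.448214, 6.31551, 1.35757],
               [0.289201, 2.3925, 11.062  ]])

R  = np.vstack([Al, Ar])                         # 6 x 3 partitioned matrix [Al; Ar]
CC = np.block([[C, np.zeros((3, 3))],
               [np.zeros((3, 3)), C]])           # [C 0; 0 C], so D = CC @ v

B0   = np.linalg.solve(R.T @ R, R.T @ CC)        # (R^T R)^(-1) R^T [C 0; 0 C]
n    = 1.0 / (B0 @ np.ones(6))                   # diagonal of N: send [white, white] to white
B_LS = n[:, None] * B0                           # should be close to the matrix B printed above

def ls_anaglyph(left_rgb, right_rgb):
    v = np.concatenate([left_rgb, right_rgb])
    return np.clip(B_LS @ v, 0.0, 1.0)

# Comparison pixel from Section 4; compare with the [.14, .84, .62] value in Section 7.3.
print(ls_anaglyph(np.array([0.41, 0.09, 0.07]), np.array([0.85, 0.77, 0.63])))
```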
7.2 Region Merge
Interval analysis shows that because the LS color solid in R3 properly contains the RGB cube, clipping will be required. Finding colors that map to the same color but are not merged as a result of clipping is not an easy task. We used the image of the R6 vertices as shown in Figure 14. The larger dots are the extreme vertices.
Figure 15 shows the order of the first 10 vertices of the hypercube mapped to R3. This vertex sequence creates a parallelepiped whose "front" and "back" planes are very close together (.029 Euclidean distance), corresponding to the same left eye color (see Figure 16). This is true for all left eye colors of the form [x, y, z] where x, y, and z are 0 or 1 (it is not true for all left eye colors). The quadrilateral corresponding to the projection of all colors of the form [r, g, b, 0, u, v], u and v in [0, 1], is parallel to and overlaps the quadrilateral corresponding to the projection of all colors of the form [r, g, b, 1, u, v], u, v in [0, 1]. The region is bounded by the vertices [r, g, b, 0, 1, 0], [r, g, b, 0, 1, 1], [r, g, b, 1, 0, 0] and [r, g, b, 1, 0, 1]. The first vertex in a plane (in the binary ordering) is the one that occurs on the black/red edge.
Figure 17 shows how the planes are ordered. Note that if we label the planes from 0 to 7, then plane 0 moves to plane 1, to plane 3, to plane 5, back to plane 2, to plane 4, to plane 6, to plane 7. Hence there are colors with two different left eye coordinates (and two different right eye coordinates) that map to the same RGB color (ignoring the clipping that is required for colors outside the RGB cube). Examples, shown in Figure 18, are color 1 = [.904, 0, 0, 1, .25, .5] and color 2 = [0.904107, 0, 0, 0, 0.765829, 0.48873], which have radically different right eye colors. Their LS colors are both [0.344257, 0.516587, 0.49911].
7.3 Ghosting
The comparison pixel in the Dubois LS calculation is [35, 214, 158]/255 = [.14, .84, .62], which has a red channel value below the blocking threshold, so no ghosting occurs in this case. The anaglyph image is shown in Figure 19. For the stereo pairs studied, the LS method did not demonstrate ghosting.
8. CONCLUSIONS AND DIRECTIONS FOR FUTURE RESEARCH
Linear anaglyph algorithms will always map several different left/right eye colors to the same RGB color. Perhaps
nonlinear algorithms exist that avoid region merge. The results appear to show that for color images the MID method
produces excellent color and detail but may suffer severe ghosting. An anaglyph produced by the LS method is normally
darker with less detail and requires brightening or gamma correction but appears to have no ghosting. Further
investigation of the properties of the LS solid in R3 is necessary. The PS method is easy to implement and works well
for grayscale images but may also suffer ghosting and poor color representation. Because of stereo and color conflicts it
appears that formulating an anaglyph algorithm that always produces good color representation, good detail, no
ghosting, and no region merging is not possible.
We speculate that the ability to ray trace the original 3D scene can help produce a better method. Properties of
anaglyph filters for display devices also need more investigation.
ACKNOWLEDGEMENTS
We wish to thank John W. Fuller of Lee Filters, USA for his help in obtaining the transmission functions used in this paper, Joel Trussell of North Carolina State University for sharing his phosphor and LCD spectral distributions, and Eric Dubois for his help in implementing the LS algorithm.
REFERENCES
1. D. F. McAllister and Preshant D. Hebbar, "Color Quantization Aspects in Stereopsis," SPIE Proceedings Stereoscopic Displays and Applications II, Vol. 1457, pp. 233-241, SPIE, Bellingham, WA, 1991.
2. D. F. McAllister (Ed.), Stereo Computer Graphics and Other True 3D Technologies, Princeton U. Press, Princeton, NJ, 1993.
3. Eric Dubois, "A Projection Method to Generate Anaglyph Stereo Images," Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, Vol. 3, pp. 1661-1664, IEEE, Salt Lake City, UT, 2001.
4. Ray Hannisian, "Los Huicholes," stereo image at http://www.ray3d.com/road.html/
5. Andrew Woods and John Merritt, "Stereoscopic Display Application Issues," Short Course Notes, EI '02, SPIE, San Jose, CA, 2002.
6. Randy Crane, A Simplified Approach to Image Processing, Prentice Hall PTR, Upper Saddle River, NJ, 1997.
7. Daniel Malacara, Color Vision and Colorimetry, SPIE Press, Bellingham, WA, 2002.
FIGURES
Figure 1: CRT (left) and LCD spectral distributions
Figure 2: Gamuts in CIE space
Figure 3: Transmission functions for REEL3D filters
Figure 4: Color solids in CIE for RGB, left and right eyes
Figure 5: Stereo pair (use cross viewing)
Figure 6: Region in left eye view
Figure 7: Right and left eye regions upscaled
Figure 8: Projection in the PS algorithm
Figure 9: PS anaglyph
Figure 10: Images in Lab space of the primary axes for the A_l and A_r color solids
Figure 11: 6-cube vertex plot with 3-cube
Figure 12: MID anaglyph
Figure 13: RGB and LS gamuts
Figure 14: LS images of 6-cube vertices
Figure 15: Order of first 10 vertices
Figure 16: Two quadrilaterals with same left eye coordinates
Figure 17: Order of planes of left eye colors
Figure 18: Color pairs which map to same color in LS
Figure 19: LS anaglyph