The Past, Present, and Future of Head Mounted Display Designs
Jannick Rolland* and Ozan Cakmakci
College of Optics and Photonics: CREOL & FPCE, University of Central Florida
Head-mounted displays present a relatively mature option for augmenting the visual field of a potentially mobile user.
Ideally, one would wish for such capability to exist without the need to wear any view-aided device. However, unless a
display system could be created in space, anywhere and anytime, a simple solution is to wear the display. We review in
this paper the fundamentals of head-mounted displays including image sources and HMD optical designs. We further
point out promising research directions that will play a key role towards the seamless integration between the virtually
superimposed computer graphics objects and the tangible world around us.
Keywords: Displays, Mobile Displays, Wearable Displays, Optical System Design, Head-Mounted Displays, Head
1. INTRODUCTION
Per eye, an HMD is composed of a modulated light source with drive electronics, viewed through an optical system
which, together with a housing, is worn on the user's head via a headband, a helmet, or an eyeglasses frame.
Emerging technologies include various microdisplay devices, miniature modulated laser light sources with associated
scanners, and miniature projection optics replacing eyepiece optics, all contributing to unique breakthroughs in HMD optics. Because
the source of image formation is critical to the optical design, we shall review in Section 2 various forms of
microdisplay sources, followed by key optical aspects of HMDs. In Section 3, we will discuss HMD optics design. In
Section 4, we will focus on novel emerging technologies: the head-mounted projection display, the occlusion display,
and eyeglasses-based displays.
2. FUNDAMENTALS OF HEAD-MOUNTED DISPLAYS
2.1 Microdisplay Sources
In early HMDs, miniature monochrome CRTs were primarily employed. A few technologies implemented color field-
sequential CRTs. Then, VGA-resolution (i.e. 640x480 color pixels) Active-Matrix Liquid-Crystal Displays (AM-LCDs)
became the source of choice. Today, SVGA (i.e. 800x600 color pixels) and SXGA (i.e. 1280x1024 color pixels)
resolution LCDs, Ferroelectric Liquid Crystal on Silicon (FLCOS),1 Organic Light Emitting Displays (OLEDs), and
Time Multiplex Optical Shutter (TMOS) displays are coming to market for implementation in HMDs. Table 1 shows a
comparison of various miniature display technologies, or microdisplays. The challenge in developing microdisplays for
HMDs is providing high resolution on a small substrate (i.e. ~0.6-1.3 inch diagonal) together with high, uniform
luminance, which is measured either in footLamberts (fL) or candelas per square meter (cd/m2); 1 cd/m2
equals 0.29 fL. An alternative to bright microdisplays is to partially attenuate the scene luminance, as has been
commonly done in the simulator industry since its inception. Such an alternative may not be an option for surgical
displays. FLCOS displays, which operate in reflection and can be thought of as reflective light modulators, can be
brightly illuminated in telecentric mode; however, innovative illumination schemes must be developed to offer compact
solutions. OLEDs use polymers that emit light when an electrical current is passed through them. Their brightness can be
competitive with that of FLCOS displays, however at the expense of a shorter life span. Another important characteristic,
often underplayed in microdisplays, is the pixel response time, which, if slow, can increase latency.2 The TMOS
technology functions in a field-sequential mode by feeding the three primary colors in rapid alternating succession to a
single light-modulating element. Unlike LCD technology that uses color filters, the color is emitted directly from the
panel. Opening and closing of the light modulator allows the desired amount of each primary color to be transmitted.
*Jannick@odalab.ucf.edu; phone:407-823-6870; fax:407-823-6880
Optical Design and Testing II, edited by Yongtian Wang,
Zhicheng Weng, Shenghua Ye, José M. Sasián, Proc. of SPIE Vol. 5638
(SPIE, Bellingham, WA, 2005) · 0277-786X/05/$15 · doi: 10.1117/12.575697
Table 1: Microdisplays (< 1.5 inch diagonal) for HMDs

                            CRT           AMLCD          FLCOS            OLED           TMOS
Diagonal Size (inch)        > 0.5         > 0.7          > 0.6            > 0.66         > 0.5
Life Span (Hours)           40,000        20,000         10,000-15,000    <10,000        >100,000
Brightness (cd/m2)          ~100          <100           300-1000         100-700        200-1000
Contrast Ratio              300:1-700:1   150:1-450:1    Up to 2000:1     150:1-450:1    300:1-4500:1
Type of Illumination        Raster        -              -                -              -
Pixel Response Time         Phosphor      1-30 ms*       1-100 µs         <1 ms          0.1-100 µs
Colors                      16.7M         16.7M          16.7M            16.7M          16.7M

* sub-ms may be obtained using dual-frequency materials
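The luminance-unit relation quoted above (1 cd/m2 is roughly 0.29 fL) follows directly from the definition of the footLambert. A small sketch, useful when comparing vendor specifications quoted in different units:

```python
import math

# 1 fL = (1/pi) cd/ft^2; converting square feet to square metres gives
# ~3.4263 cd/m^2 per footLambert.
FT2_IN_M2 = 0.3048 ** 2
NITS_PER_FOOTLAMBERT = 1.0 / (math.pi * FT2_IN_M2)

def footlamberts_to_nits(fl):
    """Luminance in footLamberts -> cd/m^2 (nits)."""
    return fl * NITS_PER_FOOTLAMBERT

def nits_to_footlamberts(nits):
    """Luminance in cd/m^2 (nits) -> footLamberts."""
    return nits / NITS_PER_FOOTLAMBERT

# The ~0.29 factor quoted in the text:
print(round(nits_to_footlamberts(1.0), 4))  # 0.2919
```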
2.2 Image Presentation
Perhaps surprisingly, many deployed VE systems are either monocular or present the same image to both eyes (i.e.
biocular). Such systems require neither a change in accommodation nor convergence. Accommodation is the act of changing the power
of the crystalline lens to bring objects in focus. Convergence is the act of bringing the lines of sight of the eyes inward
or outward when viewing near or far objects. In our daily experience, while we are gazing at scenes, our eyes focus and
converge at the same point. Thus, to avoid side effects, HMD systems need to stay within acceptable limits of
accommodation-convergence mismatch, approximately within ±¼ diopter.3-4 In monocular or biocular HMDs,
users accommodate at the location of the optically formed images to obtain the sharpest images. In the case of binocular
HMDs, the eyes will converge properly at the 3D location of a 3D object to avoid diplopic (i.e. doubled) vision, while
the images will appear blurred if their optical location, which lies on a single surface in current HMDs, does not fall
within the depth of field of the display optics around the image location.
In practice, driven by far field and near field applications, the unique distance of the optical images can be set either
beyond 6m (i.e. optical infinity), or at about an arm’s length, respectively. Objects within the optics depth of field at a
specific setting will be perceived sharply; other objects will appear blurred. For dual near-far-field applications,
multifocal-plane displays are necessary.5
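The ±¼-diopter comfort limit cited above can be checked numerically by expressing both the optical image distance and the rendered object distance as vergences in diopters. The distances below are illustrative, not from the paper:

```python
TOLERANCE_D = 0.25  # the ~ +/- 1/4 diopter limit cited above

def vergence_d(distance_m):
    """Vergence demand of a target at the given distance, in diopters."""
    return 1.0 / distance_m

def mismatch_d(image_dist_m, object_dist_m):
    """Accommodation-convergence mismatch between the fixed optical image
    distance and the stereoscopically rendered object distance."""
    return abs(vergence_d(image_dist_m) - vergence_d(object_dist_m))

# Far-field setting: images at 6 m (optical infinity), object rendered at 2 m.
print(round(mismatch_d(6.0, 2.0), 3))       # 0.333 -> outside the comfort zone
# Near-field setting: images at arm's length, object rendered there as well.
print(mismatch_d(0.6, 0.6) <= TOLERANCE_D)  # True
```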
2.3 Nonpupil versus Pupil Forming Systems
Three current basic forms of optical design for HMDs are eyepiece, objective-eyepiece combination, and projection
optics. Only the simple eyepiece design is non-pupil forming, because it requires no intermediary image surface
conjugate to the microdisplay within the optics. In this case, the eye's pupil serves as the pupil of the HMD. For each
eye of a user, as long as a possible light path exists between any point on the microdisplay and the eye, the user will see
the virtual image of that point. An advantage of non-pupil forming systems is the large eye-location volume provided
behind the optics. Their main disadvantage is the difficulty in folding the optical path with a beam splitter or a prism
without making a significant trade-off in field of view. Unfolded optics precludes see-through capability and prevents
balancing the weight of the optics around the head.
Pupil-forming systems, on the other hand, consist of optics with an internal aperture that is typically conjugated to the
eye pupil. A mismatch in conjugates will cause part or all of the virtual image to disappear; therefore, large enough
pupils must be designed. The requirements for pupil size should be tightly coupled with the overall weight, ergonomics
of the system, field of view, and optomechanical design. Ideally, 15-17 mm pupils are preferred to allow natural eye
movements; however, 10 mm pupils have also been designed successfully (e.g. the Army's IHADSS HMD), and pupils as
small as 3 mm are common in binoculars.
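As a rough first-order illustration of why large pupils are preferred, one can estimate how far the eye's entrance pupil translates as the eye rotates. The sketch below assumes a nominal 10 mm distance from the eye's center of rotation to its entrance pupil (an approximate anatomical figure, not from the paper):

```python
import math

def required_exit_pupil_mm(eye_pupil_mm=3.0, rotation_deg=20.0,
                           rotation_radius_mm=10.0):
    """First-order exit-pupil diameter that keeps the eye's pupil inside
    the HMD exit pupil over +/- rotation_deg of eye rotation.

    rotation_radius_mm is the assumed distance from the eye's center of
    rotation to its entrance pupil (approximate anatomical value)."""
    sweep = rotation_radius_mm * math.sin(math.radians(rotation_deg))
    return eye_pupil_mm + 2.0 * sweep

# With a 3 mm eye pupil and +/-20 deg of gaze, the requirement approaches
# the 10 mm pupils that have been designed successfully:
print(round(required_exit_pupil_mm(), 1))  # 9.8
```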
2.4 Telecentricity Requirement
Whether in object or image space, telecentric optics operates with a pupil at optical infinity in that space. In the
telecentric space, the chief rays (i.e. the rays from any point on the microdisplay that pass through the center of the
pupil) are parallel to the optical axis. Telecentricity in microdisplay space is desirable to maximize illumination
uniformity across the visual field, since many microdisplays emit asymmetrically off-axis; it is not, however, strictly
required. Telecentricity further imposes that the lens aperture be at least the same size as the microdisplay, which must
be balanced against the weight constraint. A relaxed telecentric condition is therefore often successfully applied in HMD design.
3. HMD OPTICS
3.1 Immersive versus See-through Designs
HMD designs may be classified as immersive or see-through. While immersive optics refer to designs that block the
direct real-world view, see-through optics refer to designs that allow augmentation of synthetic images onto the real
world.6 Whether immersive or see-through, the optical path may or may not be folded. Ideally, immersive HMDs
aim to match the image characteristics of the human visual system. Because it is extremely challenging to design
immersive displays to match both the FOV and the visual acuity of human eyes, tradeoffs are often made. The LEEP
optics was the first large-FOV non-pupil-forming optics extensively used in the pioneering days of VEs.7 The LEEP
optics used a non-folded design. The classical Erfle eyepiece design and other eyepiece designs are shown in the first
three lines of Table 2.
See-through designs more often follow a folded design, particularly optical see-through displays. In such displays, the
optical combiner is a key component in distinguishing designs. In folded designs, the center of mass can be moved back
more easily. Folded designs, however, often add optical system complexity. A large majority of folded designs use a
dual combiner, in which reflections off a flat plate and a spherical mirror are combined, as shown in the second line of
Table 2. Droessler and Rotier used a combination of a dual combiner and off-axis optics in the tilted-cat combiner. In
Antier, various key HMD components were assembled, including a pancake-window element close to the eye enabling a
wide-FOV eyepiece. The drawback of pancake windows has been their low transmittance of approximately 1-2%;8
however, recent advances yield pancake windows with up to 20% transmittance.9 Finally, off-axis optics designs with
toroidal combiners have also been designed, two examples being shown in the last row of Table 2. The use of a toroidal
combiner serves to minimize the large amount of astigmatism introduced when tilting a spherical mirror.
3.2 Balancing Field of View and Resolution
Three main approaches have been investigated to increase FOV while maintaining high resolution: high-resolution
insets, partial binocular overlap, and tiling.10-12
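The tension that motivates these approaches can be illustrated with a flat-field approximation: spreading a fixed pixel count over a wider field lowers the angular resolution, with ~60 pixels/degree (1 arcmin per pixel) the usual benchmark for foveal acuity. The 1280-pixel count below is illustrative:

```python
def pixels_per_degree(fov_deg, pixels):
    """Average angular resolution across the field (flat-field approximation)."""
    return pixels / fov_deg

EYE_LIMIT_PPD = 60.0  # ~1 arcmin per pixel, a common foveal-acuity benchmark

# A 1280-pixel-wide microdisplay spread across wider and wider fields:
for fov in (20, 40, 60, 100):
    ppd = pixels_per_degree(fov, 1280)
    tag = "eye-limited" if ppd >= EYE_LIMIT_PPD else "display-limited"
    print(fov, round(ppd, 1), tag)
```

High-resolution insets, partial overlap, and tiling all amount to spending additional pixels only where (or by splitting the field so that) this ratio stays acceptable.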
3.3 Achieving High-Brightness Displays
Alternatives to microdisplays are laser- or laser-diode-based scanning displays, which offer brighter images and target
applications in the outdoor and medical domains. A recent approach is the Virtual Retinal Display (VRD), also called
the Retinal Scanning Display (RSD).13 In such systems, the pupil of the eye is optically conjugated to the
microscanner exit pupil. As such, a challenge revealed early in the development of the technology was the small exit
pupil (~1-3 mm) within which the eye needed to be located to see the image, which can be overcome by forming an
intermediary image followed by a pupil expander. Many devices have used a projection device, a screen, and an
eyepiece magnifier to expand the viewing volume. The NASA shuttle mission simulator (SMS) rear window is a prime
example of the technology. Controlled angle diffusers have been designed for pupil expansion in HMDs, including
diffractive exit-pupil expanders.14 Given an intermediary image, the VRD also functions with an equivalent
microdisplay, in this case formed using scanned laser light. Thus, optically, the VRD closely approaches other HMD
designs.
A recent technology based on scanned laser light is the optical CRT.15 In this approach, a single infrared laser diode is
used and scanned across a thin polymer plate doped with microcrystals. Optical upconversion is used to make the
microcrystals emit light in the red, green, and blue regions of the spectrum. Such technologies build on the pioneering
work of Nicolaas Bloembergen.16 The advantage of using a laser diode as opposed to a laser is the suppression of speckle noise.
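For any scanned-laser display of this kind, the scanner's line rate is a first-order driver of feasibility. The sketch below estimates the horizontal scanner frequency under simplifying assumptions of ours (bidirectional writing, blanking and overscan ignored):

```python
def horizontal_scan_rate_hz(visible_lines, frame_rate_hz, bidirectional=True):
    """First-order horizontal scanner frequency for a raster-scanned display.
    A bidirectional (resonant) scanner writes a line on each sweep, so one
    mirror period covers two lines; blanking and overscan are ignored."""
    lines_per_period = 2 if bidirectional else 1
    return visible_lines * frame_rate_hz / lines_per_period

# 600 visible lines (SVGA) at a 60 Hz frame rate:
print(horizontal_scan_rate_hz(600, 60))         # 18000.0 -> an ~18 kHz scanner
print(horizontal_scan_rate_hz(600, 60, False))  # 36000.0 if writing one way only
```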
Table 2: Examples of key HMD optics design forms
4. RECENT ADVANCES AND PERSPECTIVES
Due to the wide application range, HMDs must be designed for specific tasks. Besides military applications, which
dominated the HMD market for several decades,17 recent applications include medical, industrial design, visual aids
for daily living, and manufacturing, as well as distributed collaborative environments.18-21 In this section, we shall discuss
two types of novel HMDs that have yielded early prototypes: head-mounted projection displays (HMPDs) and
occlusion displays. Other emerging displays in development are multifocal HMDs5 and eyetracking-integrated HMDs.
4.1 Head Mounted Projection Displays (HMPDs)
A paradigm shift in HMD design is the replacement of compound eyepieces with projection optics combined with a
phase-conjugate material (e.g. retroreflective optical material), known as the head-mounted projection display
(HMPD).26-27 An HMPD consists of a pair of miniature projection lenses, beam splitters, and microdisplays mounted
on the head, as shown in Fig. 1a, and non-distorting retroreflective sheeting material placed strategically in the environment. Fig. 1b
shows a deployable room coated with retro-reflective material known as the Artificial Reality Center (ARC).28 A user
interacting with 3D medical models is shown in Fig. 1c, and a recent side-mounted optics version of the HMPD is
shown in Fig. 1d. Other implementations of retroreflective rooms have been developed.29
Projection optics, as opposed to eyepiece optics, and a retroreflective screen, instead of a diffusing screen, respectively
distinguish the HMPD technology from conventional HMDs and from stereoscopic projection systems. For a given
FOV, projection optics can be more easily corrected for optical aberrations, including distortion, and does not scale in
size with increased FOV, owing to the pupil internal to the lens, which is nevertheless re-imaged at the eye via a
beamsplitter oriented at 90° from that used in conventional folded eyepiece optics. The optical design of a 52° FOV
projection optics is shown in Fig. 2.30-31
Figure 1: HMPD in use in a deployable Augmented Reality Center (ARC): (a) user wearing a HMPD; (b) the ARC; (c) a
user interacting with 3D models in the ARC; and (d) side-mounted optics HMPD
Figure 2: (a) Optical layout of the 52° FOV ultra-light projection lens showing the diffractive optical element (DOE)
surface and the aspheric surface (ASP); (b) the 52° optical lens assembly and size.
4.2 Occlusion Displays
Augmented reality application developers and researchers often choose between optical and video see-through display
modes. A thorough multi-dimensional comparison of the two modes is provided in Rolland and Fuchs (2001).6 Briefly,
many scientists prefer the video see-through mode simply because it is relatively easy to implement occlusions on a
pixel-by-pixel basis. However, video see-through displays potentially suffer from lower resolution of the real world
scene due to subsampling through the cameras, lag due to processing, and the requirement to match the viewpoint of the
eye with the viewpoint of the cameras. Given these drawbacks, it is desirable to choose optical see-through displays if
they can provide occlusion capability. Occlusion is a strong monocular cue to depth perception and may be required for
certain applications.32 In the rest of this section, we present the approaches taken to that end.
For optical see-through displays, starting with Sutherland's original head-worn display,33 most conventional optical
designs, even today, combine computer-generated imagery with the real world using a beam splitter.34 Regardless
of the transmittance and reflectance percentages of the beam splitter, the consequence is that some percentage of the
real-world light will always be transmitted. Therefore, it is difficult to achieve an opaque display of virtual objects that
can block the real-world scene unless the image sources are much brighter than the scene. Alternative mechanisms to
conventional head-mounted display designs become necessary.
A first-order approach to achieving opaque objects could be to dim the light from the scene uniformly across the field of
view of the optics. Liquid crystal shutters, under voltage control, have been used to dim the light from the scene, and the
modulated output is combined with the image source. It is conceivable to use electrochromic films, under current
control, to control light levels from the scene in a similar way as with the liquid crystal shutter, while eliminating the
crossed polarizers. Finer-grained control over regions within the scene requires masks with multiple pixels. A review of
the early seeds of occlusion displays was provided in Cakmakci et al. (2004).35 The most developed prototype to date
is the ELMO-4 by Kiyokawa et al.36
A compact geometry that is capable of mutual occlusion and suitable for a see-through head-worn display is shown
in Fig. 3. Polarizing optics and the use of a reflective spatial light modulator are the key to achieving a compact geometry.
As depicted graphically in the figure, this system consists of an objective lens, a polarizer, an x-cube prism, a reflective
SLM (e.g., LCOS or DMD), a microdisplay as the image source, and an eyepiece.
The objective lens images the scene onto the SLM telecentrically. The SLM can be modeled as a flat mirror combined
with a quarter-wave plate, which, in double pass, rotates linearly polarized light by 90 degrees. After the scene is
modulated with the SLM, the modulated output is combined with the microdisplay output using the x-cube prism. The
final combined output is collimated by the eyepiece and delivered to the user's eye. The field of view of the objective
lens matches the FOV of the eyepiece to ensure unit angular magnification. Owing to this symmetry, there will be no
distortion of the real scene in this system.
Figure 3: First-order optical layout of a compact occlusion display
The eye is conjugated to the entrance pupil of the head-worn display, which causes a viewpoint offset of about
3 inches in the recent system we designed. This viewpoint offset may impact proprioception when the user
interacts with near-field real-world objects.
We now verify that the final image has the desired upright orientation with respect to the eye. This analysis makes clear
how polarizing optics yields a compact geometry without the need for erection optics. The diagram pertinent to verifying
image orientation is shown in Fig. 4. The object is indicated with an upright arrow and is assumed to have an initial
upright orientation. The object is first imaged through the objective lens and takes on an inverted orientation, as
indicated at step "1" with a solid black line in Fig. 4(a). Due to the polarizer right after the lens, the light will
be s-polarized and will therefore reflect off the s-reflect coating in the x-cube prism. The orientation upon reflection is
shown at step "2", represented in Fig. 4(b) as a solid black line close to the SLM. The SLM will reflect the image and
change the polarization, assuming the pixel is "turned on".
Figure 4: Verification of upright image orientation
As a result of this change of polarization, the light will now be p-polarized; it will therefore reflect off the p-reflect
coating on the x-cube and be directed toward the eye as shown in Fig. 4(c). The orientation after the p-reflect mirror is
shown at step "3" of Fig. 4(c), the final step in the analysis. We can thus verify that the final image will have an upright
orientation.
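The polarization bookkeeping in this analysis can be sketched with a minimal Jones-style model, treating an "on" SLM pixel as a half-wave swap of the s and p components (the mirror-plus-double-passed-quarter-wave-plate model described above):

```python
# Jones-style amplitudes as (s, p) pairs: s-polarized = (1, 0), p = (0, 1).
S_POLARIZED = (1.0, 0.0)

def slm_reflect(jones, pixel_on):
    """An 'on' pixel acts as a mirror plus a double-passed quarter-wave
    plate, i.e. a half-wave swap of the s and p components; an 'off'
    pixel reflects without changing the polarization."""
    s, p = jones
    return (p, s) if pixel_on else (s, p)

# Scene light after the polarizer is s-polarized and reaches the SLM via the
# x-cube's s-reflect coating.  An "on" pixel returns p-polarized light, which
# the p-reflect coating steers to the eye; an "off" pixel returns s-polarized
# light, which retraces its path, so that region is occluded at the eye.
print(slm_reflect(S_POLARIZED, pixel_on=True))   # (0.0, 1.0) -> to the eye
print(slm_reflect(S_POLARIZED, pixel_on=False))  # (1.0, 0.0) -> blocked
```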
We created a table of specifications for a prototype design implementation. The specifications for the objective and the
eyepiece, which are the same element by design, are provided in this section. The goal of the objective is to image a
specified field of view onto the SLM. The FOV has been set to 40 degrees full field. The system is designed with a
9 mm pupil. The focal length is set to 30.7 mm, based on the diagonal length of the LCOS. The horizontal and
vertical FOVs are set to ±15.81 degrees and ±12.77 degrees, respectively. The pixel period is on the order of 30
microns; therefore, the maximum spatial frequency will be ~37 cycles/mm. Shown in Fig. 5(a) is the layout of a recent
design based on these specifications. The performance characteristics are summarized in Fig. 5(b), which shows the
modulation transfer function. This design achieves a modulation transfer function value above 50% at the maximum
spatial frequency of ~37 lp/mm. The distortion of the lens is ~5% for the virtual scene, which can be corrected in
hardware or software.
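As a consistency check on these specifications, the first-order relation h = f·tan(θ) recovers the panel dimensions implied by the quoted focal length and half-field angles:

```python
import math

def image_half_size_mm(focal_mm, half_fov_deg):
    """Image height at the focal plane for a field angle: h = f * tan(theta)."""
    return focal_mm * math.tan(math.radians(half_fov_deg))

F_MM = 30.7                               # focal length from the specification
half_w = image_half_size_mm(F_MM, 15.81)  # horizontal half field
half_h = image_half_size_mm(F_MM, 12.77)  # vertical half field
diag_mm = 2.0 * math.hypot(half_w, half_h)

print(round(2 * half_w, 1), round(2 * half_h, 1), round(diag_mm, 1))
```

The implied diagonal comes out to about 22.3 mm (~0.88 inch), a plausible microdisplay size, consistent with the focal length having been set from the LCOS diagonal.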
Figure 5: (a) Layout of the objective lens; (b) modulation transfer function, plotted up to 30 cycles/mm for field angles
of 10°, 14°, 17°, and 20° at wavelengths of 608.9, 559.0, and 513.9 nm
Before building custom optics for the design shown in Fig. 5(a), we built a prototype with commercially
available components to check the feasibility of this approach, shown in Fig. 6. The experimental setup consists of a
light source, a transparency, a diffuser screen, an achromatic lens, a polarizing beam splitter, a liquid crystal shutter,
and an LCOS device. We used an additional lens as a weak magnifier to assist in taking pictures.
Figure 6: From left to right: (a) optical setup; (b) original transparency; (c) field of view of the optics; (d) scene as
imaged onto the LCOS and reflected back (no modulation)
Fig. 6(d) is a photograph of the optical image as would be seen through the head-worn display, with no modulation (no
occlusion) of the original scene. For comparison purposes, Fig. 6(c) is a Photoshop-scaled version of the region of
interest shown in Fig. 6(b); therefore, it looks slightly pixelated. In this basic setup, we image a relatively small
field of view, and lens 2 provides little magnification. The significance of the result is that we can form an
optical image of the scene on the F-LCOS and modulate it for occlusion.
Figure 7: (a) Modulating mask (b) Modulated scene
Fig. 7(a) shows the mask signal that modulates the scene. Fig. 7(b) shows an image of the mask on the F-LCOS, seen
through lens 2 and superimposed on the scene, at the best focus we achieved (within the capability of the digital camera
used to take the pictures). We can observe that the head of the child is blocked according to the mask, which can
have practically any shape and can be updated at video rates. This first result, which points to the promise of this new
technology, also points to the need for further work on the engineering aspects of the system to improve the contrast
ratio of the mask, which appears to be scene-illumination dependent. Finally, such displays will benefit from coupling
with 3D real-time depth extraction for the creation of occlusion masks.
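The role of such a mask can be sketched as per-pixel compositing: where the mask is opaque, the virtual pixel replaces the scene, which is what the SLM implements optically. A minimal sketch with illustrative values:

```python
def composite(scene, virtual, mask):
    """Per-pixel occlusion compositing: where the mask is 1, the virtual
    pixel occludes the scene; where it is 0, the scene shows through.
    Images are lists of rows of grayscale values."""
    return [[m * v + (1 - m) * s
             for s, v, m in zip(srow, vrow, mrow)]
            for srow, vrow, mrow in zip(scene, virtual, mask)]

scene   = [[10, 10], [10, 10]]   # real-world image formed on the SLM
virtual = [[99, 99], [99, 99]]   # microdisplay (virtual) image
mask    = [[1, 0], [0, 0]]       # occlude only the top-left pixel

print(composite(scene, virtual, mask))  # [[99, 10], [10, 10]]
```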
4.3 Eyeglasses-Based Displays
A number of factors including aesthetics and social acceptance will push displays targeting daily visual aids towards
integration with the eyeglasses form factor. It is extremely challenging to fulfill high-performance optical requirements
within this form factor. However, starting with text-based interfaces (e.g., time of day, email, and note-taking
applications), we can expect these displays to slowly carve their way toward supporting wider fields of view and higher resolution for
graphical tasks. Upton, in the mid 60's and 70's, integrated display systems within eyeglasses for applications in
speech-interpretation assistance. Initial prototypes were based on energizing small lights or lamps mounted directly
on the surface of an eyeglass lens.37 A later prototype, from the early 70's, embodied small reflecting mirrors on the
eyeglass lens and moved the light sources away from the lens, resulting in a display that was less noticeable and less
obstructive to the wearer's vision.38 In the late 80's, Bettinger developed a spectacle-mounted display
in which the spherical reflective surface of a partially transparent eyeglass lens is employed.39 There has been recent
work by Spitzer and colleagues in embedding the mirrors into the eyeglass lens.40 Spitzer et al. decided that, based on
the ~20x practical magnification of a single lens and their image goal of 28x21 cm at 60 cm, they would need a 0.7"
display, which they concluded would be too large for concealment in eyeglasses. Therefore, they concentrated on a relay
system built into the eyeglasses frame to move the microdisplay away from the eyeglasses in their initial prototype.
They have demonstrated a system in which the overall thickness of the eyeglasses lens is less than 6.5 mm, fitting in a
commercial eyeglass frame.
5. CONCLUSION
While military simulators have driven HMD designs since the 1960's, with key tasks in far-field visualization using
collimated optics, many other applications, from medicine to education, have emerged that are driving new concepts for
HMDs across multiple tasks up to near-field visualization. Today, no HMD allows coupling of eye accommodation and
convergence as one experiences in the real world, only a few HMDs provide either high resolution or large FOVs,
and no HMD allows correct occlusion of real and virtual objects. HMD design is extremely challenging because of
Mother Nature, who gave us such powerful vision of the real world on such a complex yet small network called our
brain. New constructs and emerging technologies allow us to design ever more advanced HMDs year by year.
It is only a beginning. An exciting era of new technologies is about to emerge, driven by mobile wearable displays as
they apply to our daily lives in the same way portable phones are glued to the ears of billions of people, as well as to
high-tech applications such as medical, deployable military systems, and distributed training and education.
ACKNOWLEDGMENTS
This research was supported by National Science Foundation grant IIS/HCI-0307189 and Office of Naval Research
grant N00014-02-1-0927. This invited paper summarizes sections, with updated components, of a book chapter by J. Rolland and H. Hua
in the Encyclopedia of Optical Engineering,12 and a paper by O. Cakmakci, Y. Ha, and J. Rolland in the Proceedings of ISMAR 2004.35
REFERENCES
1. Wu, S.T., and D.-K. Yang. Reflective Liquid Crystal Displays. John Wiley: New York, 2001.
2. Adelstein, B.D, Thomas G. Lee, and Stephen R. Ellis. Head tracking latency in virtual environments:
psychophysics and a model. Proceedings of the Human Factors and Ergonomics Society 47th Annual Meeting
3. Wann, J.P., S. Rushton, and M. Mon-Williams. Natural problems for stereoscopic depth perception in virtual
environments. Vis. Res. 1995,35, 2731-2736.
4. Rolland, J.P.; C. Meyer, K, Arthur, and E. Rinalducci. Methods of adjustments versus method of constant stimuli in
the quantification of accuracy and precision of rendered depth in head-mounted displays. Presence: Teleoperators
and Virtual Environments 2002,11(6), 610-625.
5. Rolland, J. P., M. Krueger, and A. Goon, "Multi-focal planes in head-mounted displays," Applied Optics 39(19),
6. Rolland, J. P.; Fuchs, H. Optical versus video see-through head-mounted displays. In Wearable Computers and
Augmented Reality. Caudell, T., Barfield, W. (Eds). Erlbaum, 2001.
7. Howlett, E. M. (1983). Wide angle color photography method and system. U.S. Patent Number 4,406,532.
8. La Russa, J.A., “Image forming apparatus,” US Patent 3,943,203 (1976).
9. Berman A.L., and Meltzer J.E., “Optical collimating apparatus,” US Patent 4,859,031 (1989)
10. Melzer, J.E. Overcoming the field of view: resolution invariant in head mounted displays. Proc. of SPIE, Vol 3362,
Helmet- and Head-Mounted Displays III, R.J. Lewandowski, L.A. Haworth, and H.J. Girolamo (eds), 284-293,
11. Grigsby S.S.; B.H. Tsou. Visual factors in the design of partial overlap binocular helmet-mounted displays. Society
for Information Displays International Symposium Digest of Technical Papers, Vol. XXVI, (1993).
12. Rolland, J.P., and H. Hua. Displays: Head-Mounted. In Encyclopedia of Optical Engineering (2005) (In press).
13. Urey, H. Retinal Scanning Displays. In Encyclopedia of Optical Engineering. Driggers, R. Ed., Marcel Dekker,
14. Urey, H. Diffractive Exit Pupil Expander for Display Applications. Applied Optics 2001,40(32), 5840-5851.
15. Bass, M.; H. Jenssen. Display medium using emitting particles dispersed in a transparent host. US Patents
6,327,074B1 (2001) and 6,501,590B2 (2002).
16. Bloembergen, N. Solid state infrared quantum counters. Physical Review Letters 1959, 2(3), 84-85.
17. Rash, C. E. (Eds.) Helmet-Mounted Displays: Design Issues for Rotary-Wing Aircraft. SPIE Press PM:
18. Caudell, T., Barfield, W. (eds.) Wearable Computers and Augmented Reality. Erlbaum, 2001.
19. Stanney K.M. (Ed.), Handbook of Virtual Environments: Design, implementation, and applications; Lawrence
Erlbaum Associates, Mahwah, New Jersey, 2002.
20. Ong, S.K., Nee, A. Y. C. (Eds.) Virtual and augmented reality applications in manufacturing; Springer-Verlag
London Ltd, June, 2004.
21. Ohata Y., and H. Tamura (Eds). Mixed Reality: merging real and virtual worlds. Co-published by Ohmsha and
22. Iwamoto, K.; Katsumata, S.; Tanie, K. An Eye Movement Tracking Type Head Mounted Display for Virtual
Reality System - Evaluation Experiments of Prototype System, Proceedings of IEEE International Conference on
Systems, Man and Cybernetics (SMC94), pp. 13-18, 1994.
23. Rolland, J.P.; A. Yoshida; L. Davis; J.H. Reif. High resolution inset head-mounted display. Applied Optics 1998,
24. Rolland, J.P., Y. Ha, and C. Fidopiastis. Albertian errors in head-mounted displays: choice of eyepoint location
for a near- or far-field task visualization. JOSA A 2004, 21(6).
25. Vaissie, L.; Rolland, J. P. Eye-tracking integration in head-mounted displays. U.S. Patent 6,433,760B1, August 13,
26. Fisher, R. Head-mounted projection display system featuring beam splitter and method of making same. US Patent
5,572,229, November 5, 1996.
27. Kijima R.; Ojika, T., Transition between virtual environment and workstation environment, International
Symposium, 1997, In: Proceedings of IEEE Virtual Reality Annual International Symposium, IEEE Comput. Soc.
Press, Los Alamitos, CA, 130-137.
28. Davis L.; Rolland J.; Hamza-Lup F.; Ha Y; Norfleet J.; Pettitt B.; Imielinska C. “Alice’s Adventures in
Wonderland: A Unique Technology Enabling a Continuum of Virtual Environment Experiences,” IEEE Computer
Graphics and Applications, 2003, February, 10-12.
29. Hua, H.; Brown, L.; Gao, C. SCAPE: supporting stereoscopic collaboration in augmented and projective
environments. IEEE Computer Graphics and Applications 2004, January/February, 66-75.
30. Hua, H; Ha, Y; Rolland, J.P. Design of an ultra-light and compact projection lenses. Applied Optics, 2003,42: 97-
31. Ha, Y.; J. P. Rolland. Optical Assessment of Head-Mounted Displays in Visual Space. Applied Optics 2002,
32. Cutting, J.E. and P.M. Vishton (1995). “Perceiving the layout and knowing distances:the integration, relative
potency, and contextual use of different information about depth,” Perception of Space and Motion, ed. By W.
Epstein and S. Rogers, Academic Press, 69-117.
33. Sutherland, I. E. A head-mounted three-dimensional display. Fall Joint Computer Conference, AFIPS Conference
Proceedings, 1968,Vol. 33, 757-764.
34. Melzer, J. E.; Moffit, K. (Eds.) Head Mounted Displays. McGraw-Hill: New York, 1997.
35. Cakmakci, O.; Y. Ha, and J.P. Rolland. “A Compact Optical See-through Head-Worn Display with Occlusion
Support.” Proceedings of ISMAR 2004, pp. 16-25.
36. Kiyokawa, K.; M. Billinghurst; B. Campbell; E. Woods. “An occlusion-capable optical see-through head-mounted
display for supporting co-located collaboration.” Proceedings of the 2003 International Symposium on Mixed and
Augmented Reality, 133-141, 2003.
37. Upton, H.W. Speech and Sound Display System. US Patent 3,463,885. Filed Oct. 22, 1965.
38. Upton, H.W. Eyeglass mounted visual display. US Patent 3,936,605. Filed Feb. 14, 1972.
39. Bettinger. Spectacle-mounted ocular display apparatus. Filed Jul 6, 1987.
40. Spitzer M.B. “Eyeglass-Based Systems For Wearable Computing”. In Proc. First International Symposium on
Wearable Computers (ISWC 1997), 13-14 October 1997, Cambridge, Massachusetts, USA. IEEE Computer
Society. ISBN 0-8186-8192-6.