The Past, Present, and Future of Head Mounted Display Designs
Jannick Rolland* and Ozan Cakmakci
College of Optics and Photonics: CREOL & FPCE, University of Central Florida
Head-mounted displays present a relatively mature option for augmenting the visual field of a potentially mobile user.
Ideally, one would wish for such capability to exist without the need to wear any view-aided device. However, unless a
display system could be created in space, anywhere and anytime, a simple solution is to wear the display. We review in
this paper the fundamentals of head-mounted displays including image sources and HMD optical designs. We further
point out promising research directions that will play a key role towards the seamless integration between the virtually
superimposed computer graphics objects and the tangible world around us.
Keywords: Displays, Mobile Displays, Wearable Displays, Optical System Design, Head-Mounted Displays, Head
Worn Displays
Per eye, an HMD is composed of a modulated light source with drive electronics viewed through an optical system; combined with a housing, these components are worn on a user's head via a headband, a helmet, or an eyeglasses frame. Emerging technologies include various microdisplay devices, miniature modulated laser light sources and associated scanners, and miniature projection optics replacing eyepiece optics, all contributing to unique breakthroughs in HMD optics. Because the source of image formation is critical to the optical design, we shall review in Section 2 various forms of microdisplay sources, followed by key optical aspects of HMDs. In Section 3, we will discuss HMD optics design. In Section 4, we will focus on novel emerging technologies: the head-mounted projection display, the occlusion display, and eyeglasses displays.
2.1 Microdisplay Sources
In early HMDs, miniature monochrome CRTs were primarily employed. A few technologies implemented color field-
sequential CRTs. Then, VGA (i.e. 640x480 color pixels) resolution Active-Matrix Liquid-Crystal-Displays (AM-LCDs)
became the source of choice. Today, SVGA (i.e. 800x600 color pixels) and SXGA (i.e. 1280x1024 color pixels) resolution LCDs, Ferroelectric Liquid Crystal on Silicon (FLCOS),1 Organic Light Emitting Displays (OLEDs), and Time Multiplex Optical Shutter (TMOS) displays are coming to market for implementation in HMDs. Table 1 shows a
comparison of various miniature display technologies, or microdisplays. The challenge in developing microdisplays for
HMDs is providing high resolution on a reasonably sized yet not too large substrate (i.e. ~0.6-1.3 inches), and high uniform luminance, which is measured either in footlamberts (fL) or candelas per square meter (cd/m2) (i.e. 1 cd/m2 equals 0.29 fL). An alternative to bright microdisplays is to attenuate in part the scene luminance, as has been commonly done in the simulator industry since its inception. Such an alternative may not be an option for surgical displays. FLCOS displays, which operate in reflection and can be thought of as reflective light modulators, can be brightly illuminated in telecentric mode; however, innovative illumination schemes must be developed to offer compact solutions. OLEDs use polymers that emit light when an electrical current is passed through them. Their brightness can be competitive with FLCOS displays, however at the expense of a shorter life span. Another important characteristic often underplayed in microdisplays is the pixel response time, which, if slow, can lead to increased latency.2 The TMOS
technology functions in a field sequential mode by feeding the three primary colors in rapid alternating succession to a
single light-modulating element. Unlike LCD technology that uses color filters, the color is emitted directly from the
panel. Opening and closing of the light modulator allows the desired amount of each primary color to be transmitted.
*; phone:407-823-6870; fax:407-823-6880
Invited Paper
Optical Design and Testing II, edited by Yongtian Wang,
Zhicheng Weng, Shenghua Ye, José M. Sasián, Proc. of SPIE Vol. 5638
(SPIE, Bellingham, WA, 2005) · 0277-786X/05/$15 · doi: 10.1117/12.575697
Table 1: Microdisplays (< 1.5 inch diagonal) for HMDs

                           CRT           AM-LCD        FLCOS           OLED          TMOS
Diagonal Size (inch)       > 0.5         > 0.7         > 0.6           > 0.66        > 0.5
Life Span (Hours)          40,000        20,000        10,000-15,000   <10,000       >100,000
Brightness (cd/m2 or fL)   ~100          <100          300-1000        100-700       200-1000
Contrast Ratio             300:1-700:1   150:1-450:1   Up to 2000:1    150:1-450:1   300:1-4500:1
Type of Illumination       Raster        —             —               —             Time Multiplex Optical Shutter
Pixel Response Time        Phosphor      1-30 ms*      1-100 µs        <1 ms         0.1-100 µs
Colors                     16.7M         16.7M         16.7M           16.7M         16.7M

* sub-ms may be obtained using dual-frequency materials
2.2 Image Presentation
Perhaps surprisingly, many deployed VE systems present either a monocular image or the same image to both eyes. Such systems require neither a change in accommodation nor in convergence. Accommodation is the act of changing the power of the crystalline lens to bring objects into focus. Convergence is the act of rotating the lines of sight of the eyes inward or outward when viewing near or far objects. In our daily experience, while we are gazing at scenes, our eyes focus and converge at the same point. Thus, to avoid side effects, HMD systems need to stay within acceptable limits of accommodation-convergence mismatch, approximately within ±¼ diopter.3-4 In monocular or biocular HMDs, users accommodate at the location of the optically formed images to obtain the sharpest images. In the case of binocular HMDs, the eyes will converge properly at the 3D location of a 3D object to avoid diplopic (i.e. doubled) vision, while the images will appear blurred if their optical location, which lies on a single surface in current HMDs, does not fall within the depth of field of the display optics around the image location.
In practice, driven by far field and near field applications, the unique distance of the optical images can be set either
beyond 6m (i.e. optical infinity), or at about an arm’s length, respectively. Objects within the optics depth of field at a
specific setting will be perceived sharply. Other objects will be perceived as blurred. For dual near-far field applications, multifocal-plane displays are necessary.5
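The ±¼-diopter criterion can be illustrated with a short calculation (an illustrative sketch, not from the paper): expressing depth in diopters (1/distance in meters), the mismatch for an object rendered at one depth while the optical image surface sits at another follows directly.

```python
# Illustrative sketch: the eyes converge at the rendered 3D object distance,
# while accommodation is pulled toward the fixed optical image surface.
# In diopters (1/distance in meters), the accommodation-convergence mismatch
# for an object at d_object with the image surface at d_image is
# |1/d_object - 1/d_image|.

def mismatch_D(d_object_m: float, d_image_m: float) -> float:
    return abs(1.0 / d_object_m - 1.0 / d_image_m)

d_image = 6.0  # image surface at "optical infinity" (6 m), as in the text
for d_obj in (6.0, 2.0, 0.5):
    m = mismatch_D(d_obj, d_image)
    verdict = "within" if m <= 0.25 else "exceeds"
    print(f"object at {d_obj} m: mismatch {m:.2f} D ({verdict} +-1/4 D)")
```

The loop shows why a single image surface set at infinity serves far-field tasks but quickly exceeds the comfort limit for near-field objects, motivating the arm's-length setting and multifocal-plane displays.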
2.3 Nonpupil versus Pupil Forming Systems
Three current basic forms of optical design for HMDs are eyepiece, objective-eyepiece combination, and projection
optics. Only the simple eyepiece design is non-pupil forming, because it requires no intermediary image surface conjugate to the microdisplay within the optics. In this case, the eyes' pupils serve as the pupils of the HMD. For each eye of a user, as long as a possible light path exists between any point on the microdisplay and the eye, the user will see the virtual image of that point. An advantage of non-pupil-forming systems is the large eye-location volume provided behind the optics. Their main disadvantage is the difficulty of folding the optical path with a beam splitter or a prism without making a significant trade-off in field of view. Unfolded optics prohibit see-through capability and prevent balancing the weight of the optics around the head.
Pupil-forming systems, on the other hand, consist of optics with an internal aperture which is typically conjugated to the eye pupils. A mismatch in conjugates will cause part or all of the virtual image to disappear; therefore, large enough pupils must be designed. The requirements for pupil size should be tightly coupled with the overall weight, ergonomics of the system, field of view, and optomechanical design. Ideally, 15-17 mm pupils are preferred to allow natural eye movements; however, 10 mm pupils have also been designed successfully (e.g. the Army's IHADSS HMD), and binoculars with pupils as small as 3 mm are commonly designed.
2.4 Telecentricity Requirement
Whether in object or image space, telecentric optics operate with a pupil at optical infinity in that space. In the telecentric space, the chief rays (i.e. the rays from any point on the microdisplay that pass through the center of the pupil) are parallel to the optical axis. Telecentricity in microdisplay space is desirable to maximize illumination uniformity across the visual field; however, it does not guarantee uniformity because many microdisplays exhibit asymmetry off-axis. Telecentricity further imposes that the lens aperture be at least the same size as the microdisplay, which has to be balanced against the weight constraint. A relaxed telecentric condition is often successfully applied in HMD design.
3.1 Immersive versus See-through Designs
HMD designs may be classified as immersive or see-through. While immersive optics refer to designs that block the
direct real-world view, see-through optics refer to designs that allow augmentation of synthetic images onto the real
world.6 Whether immersive or see-through, the optical path may or may not be folded. Ideally, immersive HMDs aim to match the image characteristics of the human visual system. Because it is extremely challenging to design immersive displays to match both the FOV and the visual acuity of human eyes, tradeoffs are often made. The LEEP optics was the first large-FOV non-pupil-forming optics extensively used in the pioneering times of VEs.7 The optics used a non-folded design. The classical Erfle eyepiece design and other eyepiece designs are shown in the first three lines of Table 2.
See-through designs more often follow a folded design, particularly optical see-through displays. In such displays, the optical combiner is a key component distinguishing designs. In folded designs, the center of mass can more easily be moved back; folded designs, however, often indicate optical system complexity. A large majority of folded designs use a dual combiner, combining reflections off a flat plate and a spherical mirror, as shown in the second line of Table 2. Droessler and Rotier used a combination of a dual combiner and off-axis optics in the tilted-cat combiner. In Antier, various key HMD components were assembled, including a pancake-window element close to the eye enabling a wide-FOV eyepiece. The drawback of pancake windows has been their low transmittance of approximately 1-2%;8 however, recent advances yield pancake windows with up to 20% transmittance.9 Finally, off-axis optics designs with toroidal combiners have also been designed, two examples being shown in the last row of Table 2. The use of a toroidal combiner serves to minimize the large amount of astigmatism introduced when tilting a spherical mirror.
3.2 Balancing Field of View and Resolution
Three main approaches have been investigated to increase FOV while maintaining high resolution: high-resolution
insets, partial binocular overlap, and tiling.10-12
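A back-of-the-envelope calculation shows the tradeoff these approaches address (the 1280-pixel width is an illustrative value, not a figure from the paper): spreading a fixed pixel count over a wider FOV degrades angular resolution relative to the roughly 1 arcmin limit of the fovea.

```python
# Illustrative sketch: angular resolution of a display, in arcminutes per
# pixel, when a fixed horizontal pixel count is spread over a given FOV.
# The fovea resolves roughly 1 arcmin, so wide-FOV single-display designs
# fall short, motivating insets, partial overlap, and tiling.

EYE_LIMIT_ARCMIN = 1.0  # approximate foveal resolution limit

def arcmin_per_pixel(fov_deg: float, pixels: int) -> float:
    return fov_deg * 60.0 / pixels

for fov in (20, 50, 100):
    r = arcmin_per_pixel(fov, 1280)
    print(f"{fov} deg FOV, 1280 px: {r:.2f} arcmin/pixel "
          f"(eye limit ~{EYE_LIMIT_ARCMIN} arcmin)")
```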
3.3 Achieving High-Brightness Displays
Alternatives to microdisplays are laser or laser-diode based scanning displays, which offer brighter displays and target applications in the outdoor and medical domains. A recent approach is the Virtual Retinal Display (VRD), also called the Retinal Scanning Display (RSD).13 In such systems, the pupil of the eye is optically conjugated to the microscanner exit pupil. As such, a challenge revealed early in the development of the technology was the small exit pupil (i.e. 1-3 mm) within which the eye needed to be located to see the image, which can be overcome by forming an intermediary image followed by a pupil expander. Many devices have used a projection device, a screen, and an eyepiece magnifier to expand the viewing volume. The NASA shuttle mission simulator (SMS) rear window is a prime example of the technology. Controlled-angle diffusers have been designed for pupil expansion in HMDs, including diffractive exit-pupil expanders.14 Given an intermediary image, the VRD also functions with an equivalent microdisplay, in this case formed using scanned laser light. Thus, optically, the VRD closely approaches other HMD designs.

A recent technology based on scanned laser light is the optical CRT.15 In this approach, a single infrared laser diode is used and scanned across a polymer thin plate doped with microcrystals. Optical upconversion is used to have the microcrystals emit light in the red, green, and blue regions of the spectrum. Such technologies build on the pioneering work of Nicolaas Bloembergen.16 The advantage of using a laser diode as opposed to a laser is the suppression of speckle noise.
Table 2: Examples of key HMD optics design forms
Due to their wide application range, HMDs must be designed for specific tasks. Besides military applications, which dominated the HMD market for several decades,17 recent applications include medical, industrial design, visual aids for daily living, and manufacturing, as well as distributed collaborative environments.18-21 In this section, we shall discuss two types of novel HMDs that have yielded recent early prototypes: head-mounted projection displays (HMPDs) and occlusion displays. Other emerging displays in development are multifocal HMDs5 and eyetracking-integrated HMDs.22-25
4.1 Head Mounted Projection Displays (HMPDs)
A paradigm shift in HMD design is the replacement of compound eyepieces with projection optics combined with a phase-conjugate material (e.g. retroreflective optical material), known as head-mounted projection displays (HMPDs).26-27 An HMPD consists of a pair of miniature projection lenses, beam splitters, and microdisplays mounted on the head, as shown in Fig. 1a, and non-distorting retroreflective sheeting material placed strategically in the environment. Fig. 1b shows a deployable room coated with retroreflective material known as the Artificial Reality Center (ARC).28 A user interacting with 3D medical models is shown in Fig. 1c, and a recent side-mounted optics version of the HMPD is shown in Fig. 1d. Other implementations of retroreflective rooms have been developed.29
Projection optics, as opposed to eyepiece optics, and a retroreflective screen, instead of a diffusing screen, respectively distinguish the HMPD technology from conventional HMDs and stereoscopic projection systems. For a given FOV, projection optics can be more easily corrected for optical aberrations, including distortion, and does not scale in size with increased FOV because the pupil is internal to the lens; that pupil is nevertheless re-imaged at the eye via a beamsplitter oriented at 90° from that used in conventional folded eyepiece optics. The optical design of a 52° FOV projection optics is shown in Fig. 2.30-31
Figure 1: HMPD in use in a deployable Artificial Reality Center (ARC): (a) user wearing an HMPD; (b) the ARC; (c) a user interacting with 3D models in the ARC; and (d) side-mounted optics HMPD

Figure 2: (a) Optical layout of the 52° FOV ultra-light projection lens showing the diffractive optical element (DOE) surface and the aspheric surface (ASP); (b) the 52° optical lens assembly and size.
4.2 Occlusion Displays
Augmented reality application developers and researchers often choose between optical and video see-through mode
displays. A thorough multi-dimensional comparison between the two modes is provided in Rolland (2001).6 Briefly,
many scientists prefer the video see-through mode simply because it is relatively easy to implement occlusions on a
pixel-by-pixel basis. However, video see-through displays potentially suffer from lower resolution of the real world
scene due to subsampling through the cameras, lag due to processing, and the requirement to match the viewpoint of the
eye with the viewpoint of the cameras. Given these drawbacks, it is desirable to choose optical see-through displays if
they can provide occlusion capability. In the rest of this section, we will present the approaches taken to that end. Occlusion is a strong monocular cue to depth perception and may be required for certain applications.32
For optical see-through displays, starting with Sutherland’s original head-worn display,33 most conventional optical
designs, even today, will combine computer generated imagery with the real world using a beam splitter.34 Regardless
of the transmittance and reflectance percentages of the beam splitter, the consequence is that some percentage of light
will always be transmitted. Therefore, it is difficult to achieve opaque display of virtual objects that can block the real
world scene, unless the image sources are much brighter than the scene. Alternative mechanisms to the conventional
head-mounted display designs become necessary.
A first order approach to achieving opaque objects could be to dim the light from the scene uniformly across the field of
view of the optics. Liquid crystal shutters, under voltage control, have been used to dim the light from the scene and the
modulated output is combined with the image source. It is conceivable to use electrochromic films to control light levels from the scene under current control, in a way similar to the liquid crystal shutter but eliminating the crossed polarizers. Finer-grained control over regions within the scene requires masks with multiple pixels. A review of early
seeds of occlusion displays was provided in Cakmakci et al. (2004).35 The most developed prototype to date is the ELMO-4 by Kiyokawa et al.36
A compact geometry that is capable of mutual occlusions and suitable for a see-through head-worn display is shown in Fig. 3. Polarizing optics and the use of a reflective spatial light modulator are the keys to achieving a compact geometry. As depicted graphically in the figure, this system consists of an objective lens, a polarizer, an x-cube prism, a reflective SLM (e.g., LCOS or DMD), a microdisplay as the image source, and an eyepiece.
The objective lens images the scene onto the SLM telecentrically. The SLM can be modeled as a flat mirror and a quarter-wave plate, which, in double pass, rotates linearly polarized light by 90 degrees. After the scene is modulated with the SLM, the modulated output is combined with the microdisplay output using the x-cube prism. The final combined output is collimated by the eyepiece and delivered to the user's eye. The field of view of the objective lens matches the FOV of the eyepiece to ensure unit angular magnification. There will be no distortion of the real scene in this system due to the symmetry.
Figure 3: First order optical layout of a compact occlusion display
The eye is conjugated to the entrance pupil of the head-worn display, which causes a viewpoint offset of about 3 inches for the recent system we designed. The viewpoint offset may impact proprioception when the user interacts with near-field real-world objects.
We now verify that the final image will have the desired upright orientation with respect to the eye. This makes clear how polarizing optics yields a compact geometry without the need for erection optics. The diagram pertinent to verifying image orientation is shown in Fig. 4. The object is indicated with an upright arrow and is assumed to have an initial upright orientation. The object is first imaged through the objective lens and has an inverted orientation, as indicated at step "1" with a solid black line in Fig. 4(a). Due to the polarizer right after the lens, the light will be s-polarized; therefore, it will hit the s-reflect coating in the x-cube prism. The orientation upon reflection is shown at step "2", represented in Fig. 4(b) as a solid black line close to the SLM. The SLM will reflect the image and change the polarization, assuming the pixel is "turned on".

Figure 4: Verification of upright image orientation

Owing to this change of polarization, the light will now be p-polarized and will therefore hit the p-reflect coating on the x-cube and be directed towards the eye, as shown in Fig. 4(c). The orientation after the p-reflect mirror is shown at step "3" of Fig. 4(c), the final step in the analysis. We can clearly verify that the final image will have an upright orientation.
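The polarization bookkeeping above can be sketched as a tiny state machine. This is an illustrative model, not the authors' code, and it ignores contrast and leakage effects:

```python
# Minimal sketch of the polarization routing described above. Scene light
# enters the x-cube s-polarized and is sent to the SLM by the s-reflect
# coating. An "on" SLM pixel (mirror + quarter-wave plate in double pass)
# rotates the polarization by 90 degrees, so the returning light is
# p-polarized and the p-reflect coating routes it toward the eyepiece.
# An "off" pixel leaves the light s-polarized, and that part of the scene
# never reaches the eye, i.e. it is occluded.

def through_occlusion_display(pixel_on: bool) -> str:
    state = "s"                 # after the polarizer, entering the x-cube
    if pixel_on:
        state = "p"             # quarter-wave double pass: 90 deg rotation
    # Only p-polarized light takes the p-reflect leg toward the eyepiece.
    return "reaches eye" if state == "p" else "blocked (occluded)"

print(through_occlusion_display(True))   # scene pixel visible
print(through_occlusion_display(False))  # scene pixel occluded
```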
We created a table of specifications for a prototype design implementation. The specifications for the objective and the eyepiece, which are the same element by design, are provided in this section. The goal of the objective is to image a specified field of view onto the SLM. The FOV has been set to 40 degrees full field. The system is designed with a 9 mm pupil. The focal length is set to 30.7 mm, based on the diagonal length of the LCOS. The horizontal and vertical FOVs are set to ±15.81 degrees and ±12.77 degrees, respectively. The pixel period is on the order of 30 microns; therefore, the maximum spatial frequency will be ~37 cycles/mm. Shown in Fig. 5(a) is the layout of a recent design based on these specifications. The performance characteristics are summarized in Fig. 5(b), which shows the modulation transfer function. This design achieves an above-50% modulation transfer function value at the maximum spatial frequency of ~37 lp/mm. The distortion of the lens is ~5% for the virtual scene, which can be corrected in hardware or software.
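The arithmetic behind these specifications can be checked to first order. In the sketch below, the LCOS semi-dimensions are back-solved from the stated focal length and half-fields (our assumption, not a value given in the text), and the ~37 cycles/mm figure is reproduced by taking one full cycle (a bright/dark line pair) to span two pixels, i.e. a cycle period of ~27 µm:

```python
import math

# First-order check of the objective specifications quoted in the text.
# Assumed LCOS semi-dimensions (back-solved from f and the half-fields):
f_mm = 30.7                                # objective focal length
half_width_mm, half_height_mm = 8.69, 6.96

half_h_deg = math.degrees(math.atan(half_width_mm / f_mm))
half_v_deg = math.degrees(math.atan(half_height_mm / f_mm))
print(f"half FOV: +-{half_h_deg:.2f} deg (H), +-{half_v_deg:.2f} deg (V)")

# Maximum spatial frequency: one full cycle spans two pixels, so a cycle
# period of ~27 um gives a limit of ~37 cycles/mm.
cycle_period_mm = 0.027
print(f"max spatial frequency: ~{1.0 / cycle_period_mm:.0f} cycles/mm")
```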
Figure 5: (a) Layout of the objective lens; (b) modulation transfer function, shown for field angles of 10°, 14°, 17°, and 20° and wavelengths of 513.9, 559.0, and 608.9 nm.
Before building custom optics for the design shown in Fig. 5(a), we assembled a prototype from commercially available components to check the feasibility of this approach, shown in Fig. 6. The experimental setup consists of a light source, a transparency, a diffuser screen, an achromatic lens, a polarizing beam splitter, a liquid crystal shutter, and an LCOS device. We used an additional lens to act as a weak magnifier to assist in taking pictures.
Figure 6: (a) Optical setup; (b) original transparency; (c) field of view of the optics; (d) scene as imaged onto the LCOS and reflected back (no modulation)
Fig. 6(d) is a photograph of the optical image as would be seen through the head-worn display, with no modulation (no occlusion) of the original scene. For comparison purposes, Fig. 6(c) is a Photoshop-scaled version of the region of interest shown in Fig. 6(b); therefore, it looks slightly pixelated. In the basic setup, we are imaging a relatively small field of view, and lens 2 is hardly magnifying the image. The significance of the result is that we can form an optical image of the scene on the F-LCOS and modulate it for occlusion.
Figure 7: (a) Modulating mask; (b) modulated scene

Fig. 7(a) shows the mask signal that will modulate the scene. Fig. 7(b) shows an image of the mask on the F-LCOS, seen through lens 2 and superimposed on the scene, at the best focus we achieved (within the capability of the digital camera used to take the pictures). We can observe that the head of the child is blocked according to the mask, which can have practically any shape and can be updated at video rates. This first result, which points to the promise of this new technology, also points to the need for further work on the engineering aspects of the system to improve the contrast ratio of the mask, which appears to be scene-illumination dependent. Finally, such displays will benefit from coupling with real-time 3D depth extraction for the creation of occlusion masks.
4.3 Eyeglasses based displays
A number of factors, including aesthetics and social acceptance, will push displays targeting daily visual aids towards integration with the eyeglasses form factor. It is extremely challenging to fulfill high-performance optical requirements within this form factor. However, starting with text-based interfaces (i.e., time of day, email, notetaking applications, etc.), we can expect these displays to slowly carve their way towards supporting wider fields of view and higher resolution for graphical tasks. Upton, in the mid 1960s and 1970s, integrated display systems within eyeglasses for applications in speech-interpretation assistance. Initial prototypes were based on the energization of small lights or lamps mounted directly on the surface of an eyeglass lens.37 A later prototype, from the early 1970s, embodied small reflecting mirrors on the eyeglass lens and moved the light sources away from the lens, resulting in a display that was less noticeable and less obstructive to the wearer's vision.38 In the late 1980s, Bettinger developed a spectacle-mounted display in which the spherical reflective surface of a partially transparent eyeglass lens is employed.39 There has been recent work in embedding the mirrors into the eyeglasses lens by Spitzer and colleagues.40 Based on the ~20x practical magnification of a single lens and their image goal of 28x21 cm at 60 cm, Spitzer and colleagues determined that they would need a 0.7" display, which they concluded would be too large for concealment in eyeglasses. Therefore, they concentrated on a relay system built into the eyeglasses frame to move the microdisplay away from the eyeglasses in their initial prototype. They have demonstrated a system with an overall eyeglasses-lens thickness of less than 6.5 mm, which fits in a commercial eyeglass frame.
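The sizing argument attributed to Spitzer above follows from a one-line calculation; the sketch below simply reproduces that arithmetic under the stated numbers:

```python
import math

# Required microdisplay diagonal for a target virtual image of 28 x 21 cm
# viewed at 60 cm, given a practical single-lens magnification of ~20x.
mag = 20.0
image_w_cm, image_h_cm = 28.0, 21.0

image_diag_cm = math.hypot(image_w_cm, image_h_cm)   # 35 cm diagonal
display_diag_in = image_diag_cm / mag / 2.54          # convert cm to inches
print(f"required display diagonal: {display_diag_in:.2f} inch")  # ~0.7"
```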
While military simulators have driven HMD designs since the 1960s, with key tasks in far-field visualization using collimated optics, many other applications from medical to education have emerged that are driving new concepts for HMDs across multiple tasks, up to near-field visualization. Today, no HMD allows coupling of eye accommodation and convergence as one may experience in the real world, only a few HMDs provide either high resolution or large FOVs, and no HMD allows correct occlusion of real and virtual objects. HMD design is extremely challenging because Mother Nature gave us such powerful vision in the real world on such a complex, yet small, network called our brain. New constructs and emerging technologies allow us to design ever more advanced HMDs year by year. It is only a beginning. An exciting era of new technologies is about to emerge, driven by mobile wearable displays as they apply to our daily lives in the same way portable phones are glued to the ears of billions of people, as well as to high-tech applications such as medical, deployable military systems, and distributed training and education.
This research was supported by National Science Foundation IIS/HCI-0307189 and the Office of Naval Research N00014-02-1-0927. This invited paper summarizes sections, with updated components, of a book chapter by J. Rolland and H. Hua in the Encyclopedia of Optical Engineering12 and a paper by O. Cakmakci, Y. Ha, and J. Rolland in the Proceedings of ISMAR 2004.35
1. Wu, S.T., and D.-K. Yang. Reflective Liquid Crystal Displays. New York: John Wiley, 2001.
2. Adelstein, B.D, Thomas G. Lee, and Stephen R. Ellis. Head tracking latency in virtual environments:
psychophysics and a model. Proceedings of the Human Factors and Ergonomics Society 47th Annual Meeting
2003, 2083-2087.
3. Wann, J.P., S. Rushton, and M. Mon-Williams. Natural problems for stereoscopic depth perception in virtual
environments. Vis. Res. 1995,35, 2731-2736.
4. Rolland, J.P.; C. Meyer, K, Arthur, and E. Rinalducci. Methods of adjustments versus method of constant stimuli in
the quantification of accuracy and precision of rendered depth in head-mounted displays. Presence: Teleoperators
and Virtual Environments 2002,11(6), 610-625.
5. Rolland, J. P., M. Krueger, and A. Goon, "Multi-focal planes in head-mounted displays," Applied Optics 39(19),
3209-3215 (2000).
6. Rolland, J. P.; Fuchs, H. Optical versus video see-through head-mounted displays. In Wearable Computers and
Augmented Reality. Caudell, T., Barfield, W. (Eds). Erlbaum, 2001.
7. Howlett, E. M. (1983). Wide angle color photography method and system. U.S. Patent Number 4,406,532.
8. La Russa, J.A., “Image forming apparatus,” US Patent 3,943,203 (1976).
9. Berman A.L., and Meltzer J.E., “Optical collimating apparatus,” US Patent 4,859,031 (1989)
10. Melzer, J.E. Overcoming the field of view: resolution invariant in head mounted displays. Proc. of SPIE, Vol. 3362, Helmet- and Head-Mounted Displays III, R.J. Lewandowski, L.A. Haworth, and H.J. Girolamo (eds), 284-293.
11. Grigsby S.S.; B.H. Tsou. Visual factors in the design of partial overlap binocular helmet-mounted displays. Society
for Information Displays International Symposium Digest of Technical Papers, Vol. XXVI, (1993).
12. Rolland, J.P., and H. Hua. Displays: Head-Mounted. In Encyclopedia of Optical Engineering (2005) (In press).
13. Urey, H. Retinal Scanning Displays. In Encyclopedia of Optical Engineering. Driggers, R. Ed., Marcel Dekker,
Inc., 2003.
14. Urey, H. Diffractive Exit Pupil Expander for Display Applications. Applied Optics 2001,40(32), 5840-5851.
15. Bass, M.; H. Jenssen. Display medium using emitting particles dispersed in a transparent host. US Patents 6,327,074B1 (2001) and 6,501,590B2 (2002).
16. Bloembergen, N. Solid state infrared quantum counters. Physical Review Letters 1959, 2(3), 84-85.
17. Rash, C. E. (Eds.) Helmet-Mounted Displays: Design Issues for Rotary-Wing Aircraft. SPIE Press PM:
Bellingham, 2001.
18. Caudell, T., Barfield, W. (eds.) Wearable Computers and Augmented Reality. Erlbaum, 2001.
19. Stanney K.M. (Ed.), Handbook of Virtual Environments: Design, implementation, and applications; Lawrence
Erlbaum Associates, Mahwah, New Jersey, 2002.
20. Ong, S.K., Nee, A. Y. C. (Eds.) Virtual and augmented reality applications in manufacturing; Springer-Verlag
London Ltd, June, 2004.
21. Ohata Y., and H. Tamura (Eds). Mixed Reality: merging real and virtual worlds. Co-published by Ohmsha and
Springer-Verlag, 1999.
22. Iwamoto, K.; Katsumata, S.; Tanie, K. An Eye Movement Tracking Type Head Mounted Display for Virtual
Reality System - Evaluation Experiments of Prototype System, Proceedings of IEEE International Conference on
Systems, Man and Cybernetics (SMC94), pp. 13-18, 1994.
23. Rolland, J.P.; A. Yoshida; L. Davis; J.H. Reif. High resolution inset head-mounted display. Applied Optics 1998,
37(19), 4183-4193.
24. Rolland, J.P., Y. Ha, and C. Fidopiastis. Albertian errors in head-mounted displays: choice of eyepoints location for a near or far field task visualization. JOSA A 2004, 21(6).
25. Vaissie, L.; Rolland, J. P. Eye-tracking integration in head-mounted displays. U.S. Patent 6,433,760B1, August 13, 2002.
26. Fisher, R. Head-mounted projection display system featuring beam splitter and method of making same. US Patent
5,572,229, November 5, 1996.
27. Kijima, R.; Ojika, T. Transition between virtual environment and workstation environment. In Proceedings of IEEE Virtual Reality Annual International Symposium, 1997; IEEE Computer Society Press: Los Alamitos, CA, 130-137.
28. Davis, L.; Rolland, J.; Hamza-Lup, F.; Ha, Y.; Norfleet, J.; Pettitt, B.; Imielinska, C. Alice's Adventures in Wonderland: a unique technology enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications 2003, February, 10-12.
29. Hua, H.; Brown, L.; Gao, C. SCAPE: supporting stereoscopic collaboration in augmented and projective
environments. IEEE Computer Graphics and Applications 2004, January/February, 66-75.
30. Hua, H.; Ha, Y.; Rolland, J.P. Design of an ultralight and compact projection lens. Applied Optics 2003, 42, 97-
31. Ha, Y.; Rolland, J.P. Optical assessment of head-mounted displays in visual space. Applied Optics 2002, 41(25), 5282-5289.
32. Cutting, J.E.; Vishton, P.M. Perceiving the layout and knowing distances: the integration, relative potency, and contextual use of different information about depth. In Perception of Space and Motion; Epstein, W., Rogers, S., Eds.; Academic Press, 1995; 69-117.
33. Sutherland, I.E. A head-mounted three-dimensional display. Fall Joint Computer Conference, AFIPS Conference Proceedings, 1968, Vol. 33, 757-764.
34. Melzer, J. E.; Moffit, K. (Eds.) Head Mounted Displays. McGraw-Hill: New York, 1997.
35. Cakmakci, O.; Ha, Y.; Rolland, J.P. A compact optical see-through head-worn display with occlusion support. Proceedings of ISMAR 2004, pp. 16-25.
36. Kiyokawa, K.; Billinghurst, M.; Campbell, B.; Woods, E. An occlusion-capable optical see-through head mount display for supporting co-located collaboration. Proceedings of 2003 International Symposium on Mixed and Augmented Reality, 133-141, 2003.
37. Upton, H.W. Speech and sound display system. U.S. Patent 3,463,885. Filed Oct. 22, 1965.
38. Upton, H.W. Eyeglass mounted visual display. U.S. Patent 3,936,605. Filed Feb. 14, 1972.
39. Bettinger. Spectacle-mounted ocular display apparatus. Filed Jul 6, 1987.
40. Spitzer, M.B. Eyeglass-based systems for wearable computing. In Proc. First International Symposium on Wearable Computers (ISWC 1997), 13-14 October 1997, Cambridge, Massachusetts, USA; IEEE Computer Society. ISBN 0-8186-8192-6.