Sensor-offset HMD perception and performance
James E. Melzer (a) and Kirk Moffitt (b)
(a) Rockwell Collins Optronics, 2752 Loker Ave. West, Carlsbad, CA 92010-9731
(b) La Quinta, CA
ABSTRACT
The perceptual and performance effects of viewing HMD sensor-offset video were investigated in a series of small
studies and demonstrations. A sensor-offset simulator was developed with three sensor positions relative to left-eye
viewing: inline and forward, temporal and level, and high and centered. Several manual tasks were used to test the effect
of sensor offset: card sorting, blind pointing and open-eye pointing. An obstacle course task was also used, followed by a
more careful look at avoiding specific obstacles. Once the arm and hand were within the sensor field of view, the user
demonstrated the ability to readily move to the target regardless of the sensor offset. A model of sensor offset was
developed to account for these results.
Keywords: Sensor offset, HMD, perception, performance
1. PROBLEM
The easiest approach to designing a helmet-mounted sensor/display system is to bolt displays and sensors onto the
helmet. For example, a sensor/display module can be suspended in front of the viewing eye in a manner typical of night
vision goggles (NVG), with the sensor facing outward. This approach, however, results in a very forward center of gravity, with
the potential to cause neck strain for the soldier. A more ambitious design approach is to integrate displays and sensors
into the helmet so as to minimize bulk and protrusions and to optimize weight and balance in an attractive package. But
this integrated approach creates an offset of the sensor with respect to the wearer’s normal line of sight. An example of
the integrated approach is the Soldier Mobility and Rifle Targeting System (SMaRTS), shown in Figure 1, where the
sensor is above eye-level and is head-centered.
Figure 1. The Soldier Mobility and Rifle Targeting System (SMaRTS), with its two imaging sensors
(visible-band on top, long-wave infrared on the bottom) located high and centered on the head, and the
digital HMD worn over the right eye.
Our current interest is in an integrated sensor/display system that is monocular, with a moderate field of view (FOV) and
unity magnification. A necessary trade-off with this design is the desire to mount the sensor package in a location that is
not directly in line with the user’s eye. Data are very limited on the perceptual and performance effects of sensor offset,
and there are no engineering guidelines.
James E. Melzer is Manager of Research and Technology; phone 1-760-438-9255; jemelzer@rockwellcollins.com
Kirk Moffitt is a human factors consultant; phone 1-760-360-0204; kirkmoffitt@earthlink.net
Head- and Helmet-Mounted Displays XII: Design and Applications
edited by Randall W. Brown, Colin E. Reese, Peter L. Marasco, Thomas H. Harding
Proc. of SPIE Vol. 6557, 65570G, (2007) · 0277-786X/07/$18 · doi: 10.1117/12.721156
2. BACKGROUND
2.1 Displaced vision
A well-developed literature describes the effects of displaced vision on manual tasks at a near distance that can be
characterized as vision intensive [1, 2]. The typical methodology uses one or two prisms to displace the visual scene and
usually minimizes viewing the arms or hands. Pointing or reaching initially overshoots the target in the direction of the
displaced image. This is followed by gradual adaptation to the displacement, though perfect adaptation does not always
occur. Following removal of the prism displacement, a negative aftereffect is temporarily reported where pointing and
reaching errors are in the opposite direction.
2.2 Angular error
Reports of human-performance problems with HMD offset-sensor systems can sometimes be attributed to angular error
(i.e., the sensor line of sight not aligned with the display line of sight). An informal test was conducted at Rockwell Collins Optronics
(RCO) in 2005 using a simulated SMaRTS system. Several of the relevant sensor/display offset conditions were
evaluated in terms of walking and reaching, as well as general perceptions of height and tilt. This simulator consisted of
an RCO monocular display and a forehead-mounted daylight camera. The horizontal FOV was approximately 32°. The
camera was slewed, tilted and rotated while the display remained in front of the right eye. Five subjects were tested, and
their behavior and observations were all in general agreement.
Initial testing with the camera and display both aligned straight ahead (i.e., boresighted to each other) resulted in
minimal problems. Participants were able to walk across the room, grasp objects, and move in straight lines. As
expected, movement was slower and slightly more hesitant than with naked-eye viewing, likely due to the field of view
restriction over normal viewing. With the camera slewed to the side by approximately 10°, each participant was asked to
walk across the room to grab a door handle, and then back to grab an object sitting on a bench. Walking was noticeably
slowed and hesitant, and started in the direction opposite the camera direction. Walking across the room from another
direction towards a doorway resulted in an arced path for all participants. The distance was approximately 20 feet, and
the arc was about 2 feet off-line at the halfway point. The endpoint was within the doorway.
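As a rough consistency check on this observation, the reported 2-foot deviation at the halfway point of a roughly 20-foot walk corresponds to a heading error close to the 10° camera slew. The sketch below makes the geometry explicit; it is our own arithmetic, not part of the original test.

```python
import math

# Back-of-the-envelope check of the arc geometry (our sketch, not part of the
# original test): a lateral deviation of ~2 ft at the ~10 ft halfway point of a
# 20 ft walk implies a heading error of roughly the 10 deg camera slew.
walk_length_ft = 20.0
halfway_ft = walk_length_ft / 2.0
offline_ft = 2.0

heading_error_deg = math.degrees(math.atan2(offline_ft, halfway_ft))
print(f"Implied heading error at the halfway point: {heading_error_deg:.1f} deg")  # ~11.3 deg
```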
Tilting the camera up approximately 10° simulated the effect of looking down at the display. After walking back and
forth across the room, participants said they felt “tall” and as if they were “on stilts.” Another observation was that the
room appeared to tip forwards, resulting in the impression of walking downhill.
Rotating the camera approximately 10° was disturbing and not easily corrected. Given the instruction to regain
gravitational upright by using head tilting, it was initially unclear whether to move the head left or right. Furthermore,
large head tilts never seemed to make the image upright. Some of this can be explained by the difference between the center of
rotation of the HMD image and that of head tilting: head tilt rotates about the base of the neck and describes a wide arc. To
complicate matters, most head tilting is accompanied by head rotation. Further, this head motion
stimulates the vestibular system and the sense of balance, and drives both vertical and small rotational eye movements.
The results of the testing indicate that in the case of a digital sensor and imaging display, the two need to be boresighted
to reduce perceptual and locomotion effects. Although there were no specific tests conducted to evaluate a numerical
alignment tolerance, it is recommended that there be a maximum of 0.5° angular error between the two. Note that the
effects associated with an angular error between the sensor and display should not be confused with sensor-offset. The
remainder of this report assumes the two are boresighted with no angular error.
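To put the recommended 0.5° tolerance in perspective, the sketch below converts it into display pixels, assuming an SVGA (800x600) image spanning the ~32° horizontal FOV of the eMagin HMD described in Section 3. The pixel figure is our illustration, not a measured value.

```python
# Hypothetical conversion of the 0.5 deg boresight tolerance into display pixels,
# assuming an 800x600 image spanning ~32 deg horizontally (an assumption based
# on the eMagin HMD described in Section 3).
fov_h_deg = 32.0
pixels_h = 800
deg_per_pixel = fov_h_deg / pixels_h          # ~0.04 deg per pixel
tolerance_px = 0.5 / deg_per_pixel            # ~12.5 pixels
print(f"{deg_per_pixel:.3f} deg/pixel; 0.5 deg is about {tolerance_px:.1f} pixels")
```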
2.3 Early sensor-offset studies
A 1998 study of the effect of offset binocular cameras on eye-hand coordination used cameras positioned forward 165
mm and upward 62 mm with the image seen on a head-mounted display [3]. Measures of performance using a pegboard
task showed significant cost. There was adaptation over time, though performance never returned to the baseline level.
Negative aftereffects were also observed after removal of the apparatus. Only the one camera position was tested.
What about manual tasks that involve distances greater than arm’s length? One study measured the effects of several
stereo NVG configurations on grenade-tossing performance to a target at a distance of 20 feet [4]. Compared to the control
condition where the NVG objectives were separated by the nominal distance of the eyes, a hyperstereo configuration
where the lateral NVG separation was twice the nominal eye separation significantly degraded tossing performance, and
this was attributed to exaggerated stereo. When the NVG objectives were vertically displaced, but with the same lateral
separation as the eyes, no performance degradation was found. While the imaging apparatus studied by these researchers
was binocular, it provides some indication that a simple vertical offset does not affect a medium-distance task involving
distance judgment.
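To illustrate why doubling the objective separation exaggerates stereo at this range, the sketch below works out the convergence geometry. The interpupillary distance and the interpretation in terms of apparent distance are our assumptions, not values reported in the cited study.

```python
import math

# Sketch of the hyperstereo geometry (our illustration; the cited study reports
# only the performance result). Doubling the objective separation doubles the
# convergence angle for a given range, so the target is signalled as if it were
# at roughly half its true distance.
ipd_m = 0.063            # assumed nominal interpupillary distance
target_m = 6.1           # ~20 ft

def vergence_deg(separation_m, distance_m):
    """Convergence angle subtended by the two apertures at the target."""
    return math.degrees(2.0 * math.atan2(separation_m / 2.0, distance_m))

normal = vergence_deg(ipd_m, target_m)          # ~0.59 deg
hyper = vergence_deg(2.0 * ipd_m, target_m)     # ~1.18 deg
# Distance that would produce the hyperstereo vergence angle with a normal IPD:
apparent_m = (ipd_m / 2.0) / math.tan(math.radians(hyper) / 2.0)
print(f"normal {normal:.2f} deg, hyperstereo {hyper:.2f} deg, "
      f"apparent distance ~{apparent_m:.1f} m")   # ~3.05 m, about half the true range
```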
What about walking and driving with offset vision? Walking and driving involve picking up flow-field
information in the visual periphery and the direction of waypoints, rather than size and distance computations. To
approach an object, we make it expand in our field of view while the ground flows backward. The kinesthetic feedback
from our feet on the ground simplifies the act of walking. This sensation of grounding precludes the need to directly
observe our feet. During these activities, we are generally looking forward. One researcher used himself as a subject in
extended testing of a displaced camera system similar to that used by Biocca and Rolland [3, 5]. He wore the head-mounted
apparatus for several days, and found that walking around his building, up and down stairs, and through doorways was
not a problem.
2.4 FOV effects
What will have an effect on mobility is the limited FOV of the sensor/display system. FOVs of 12° and 40° have been
shown to result in significant errors in a navigation task, with some degradation present with a larger FOV of 90° [6].
Performance degradation has also been reported on search and maze tasks with FOVs of 48° up to 112° [7]. Peripheral
vision is important to self-movement not because of retinal organization, but because that is where the highest rate of
optic flow occurs. If vision is limited to the sensor/display FOV, increased head movement and slower locomotion are
expected.
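As a rough sense of scale, the sketch below compares the 32° x 24° sensor/display view with an assumed normal binocular field of about 200° x 130°; the normal-field extent and the flat-angle approximation are our assumptions, not values from the cited studies.

```python
# Rough comparison of the simulator FOV with a normal binocular visual field
# (assumed extent ~200 deg x 130 deg); treated as flat angular areas, which
# slightly overstates the solid-angle ratio but conveys the scale.
hmd_area = 32.0 * 24.0
normal_area = 200.0 * 130.0
print(f"HMD view covers roughly {100.0 * hmd_area / normal_area:.0f}% "
      f"of the normal field")   # ~3%
```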
3. SIMULATOR
The available data provide little design guidance for the location of offset-sensors on an HMD system. We devised a
plan to use helmet-mounted cameras and a head-mounted display to test the effects of sensor offset. Manual coordination
and mobility tasks were used in testing with small numbers of subjects.
3.1 Sensor-offset apparatus
A simulator was constructed using an eMagin HMD and three miniature monochrome daylight cameras. Night-vision
sensors were not used due to cost, weight and complexity. The sensor-offset simulator is diagrammed in Figure 2. The
three basic components are the helmet and cameras, the eMagin binocular HMD, and the backpack-mounted video
switcher, video interface, laptop PC and eMagin controller. The Watec cameras are 537x597-pixel, >380-line
monochrome systems with a 6 mm focal-length lens and an approximate horizontal FOV of 32°.
Figure 2. Block diagram of the sensor-offset simulator: the helmet-mounted front, side and top Watec cameras; the
head-mounted eMagin HMD; the backpack-mounted MultiVideo switching unit, AverMedia PCMCIA video interface,
Dell laptop PC and eMagin controller; plus a rotary switch and 9 V battery.
The eMagin HMD is an 800x600-pixel SVGA binocular display with a horizontal FOV of approximately 32°. Viewing
was left-eye-only, with the right eyepiece covered. The helmet was a large-size bicycle helmet with a camera platform
on the front left and counterbalancing weights on the back right. The weight of the helmet system was approximately
one kilogram. The helmet assembly and HMD are shown in Figure 3, and the backpack is added in Figure 4. The camera
offsets are described in Table 1. All outside vision was shielded with a black drape attached to the helmet with Velcro
and tied around the waist. The complete package (HMD, helmet and cameras, black drape, and backpack with electronics)
is shown in Figure 5.
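As a consistency check on the quoted camera FOV, a 6 mm lens on a small-format sensor yields roughly the stated 32° horizontal field. The sensor format used below (1/4-inch, ~3.6 mm active width) is our assumption, since the Watec sensor size is not documented here.

```python
import math

# Consistency check (sensor format is our assumption, not a documented spec):
# a 6 mm lens on a 1/4-inch format sensor (~3.6 mm active width) gives a
# horizontal FOV close to the ~32 deg quoted for the Watec cameras.
focal_mm = 6.0
sensor_width_mm = 3.6    # assumed 1/4-inch format
fov_h_deg = math.degrees(2.0 * math.atan2(sensor_width_mm / 2.0, focal_mm))
print(f"Horizontal FOV ~= {fov_h_deg:.1f} deg")   # ~33.4 deg
```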
Table 1. Camera positions relative to left eye
Offset Lateral Vertical Longitudinal
Forward 0 0 12 cm forward
High & Centered 3 cm nasal 15 cm high 7 cm forward
Side 12 cm temporal 0 0
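The practical consequence of these offsets is parallax between the camera image and the eye's natural line of sight; it shrinks with target distance and vanishes for a purely longitudinal (forward) offset. The sketch below works through the geometry for the Table 1 positions at a few near distances. It assumes a boresighted, straight-ahead camera and a target on the eye's line of sight, and is our illustration rather than a measurement.

```python
import math

# Image-space parallax implied by each camera offset in Table 1 (our geometry,
# assuming the camera and display are boresighted straight ahead and the target
# sits on the eye's line of sight).
offsets_cm = {                      # (lateral, vertical, longitudinal) re: left eye
    "Forward":         (0.0,  0.0, 12.0),
    "High & Centered": (3.0, 15.0,  7.0),
    "Side":            (12.0, 0.0,  0.0),
}

def offset_angle_deg(lateral_cm, vertical_cm, longitudinal_cm, target_cm):
    """Angle between the camera-to-target ray and straight ahead."""
    transverse = math.hypot(lateral_cm, vertical_cm)
    return math.degrees(math.atan2(transverse, target_cm - longitudinal_cm))

for name, (lat, vert, lon) in offsets_cm.items():
    for target_cm in (50.0, 100.0, 300.0):
        print(f"{name:16s} target {target_cm / 100:.1f} m: "
              f"{offset_angle_deg(lat, vert, lon, target_cm):5.1f} deg")
```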
Figure 3. Sensor-offset simulator ensemble, including a helmet, three cameras, mounting platform, power-switching
control, video connectors, counterweights and eMagin HMD. Note the locations of the three cameras (circled).
Figure 4. Sensor-offset simulator ensemble with the backpack used in the obstacle-course and avoidance studies. The
backpack held a laptop computer, the video switching unit, and the eMagin control unit.
Figure 5. Sensor-offset simulator ensemble with the black drape that limits visibility to the 32° x 24° camera video.
4. INVESTIGATIONS
4.1 General procedure and observations
Subjects were affiliated with RCO, and were verified to have at least 20/30 left-eye acuity using a vision wall-chart at a
distance of 10 meters. Eye dominance was assessed for the initial study, and each subject had an unambiguous dominant
eye. No perceptual or performance effects of eye dominance were noted. The cameras were prefocused for near or far
depending on the task. Each camera was boresighted to the HMD for all tasks.
For each task, subjects were first measured without the simulator to establish a naked-eye baseline. Within each task, the
order of sensor-offset was randomly selected for each subject.
Relative to baseline naked-eye vision, all camera conditions, with their 32° x 24° FOV, slowed movement. The
most noticeable perception was a downward slant of the floor and a minification of distant objects with the top-mounted
camera.
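A simple geometric account of this impression (our sketch, with an assumed eye height) is that the top-mounted camera views the floor from about 15 cm above the eye, so floor points are imaged at steeper depression angles than the eye expects from its own height, consistent with a floor that appears to slope downward.

```python
import math

# Geometric sketch of the floor-slant percept with the top-mounted camera: the
# camera sits ~15 cm above the eye, so floor points subtend steeper depression
# angles than expected from eye height.  Eye height is an assumed value.
eye_height_m = 1.60      # assumed standing eye height
camera_height_m = eye_height_m + 0.15

for dist_m in (2.0, 5.0, 10.0):
    eye_deg = math.degrees(math.atan2(eye_height_m, dist_m))
    cam_deg = math.degrees(math.atan2(camera_height_m, dist_m))
    print(f"floor point at {dist_m:4.1f} m: eye {eye_deg:5.1f} deg, "
          f"camera {cam_deg:5.1f} deg, difference {cam_deg - eye_deg:4.1f} deg")
```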
4.2 Manual tasks
The first test was card sorting. This was a simple task where cards from one suit were laid out at the clock positions on a
piece of felt on a table. The subject, using one hand, simply made a pile in the center starting with the “2” and ending
with the “King.” The time for this task was recorded for three repetitions, after which each of the three subjects was
asked to rate the effort required for that task on a scale of 1 (no effort) to 10 (extreme effort). A photo of this task is
shown in Figure 6.
Median times for the card-sorting task are shown in Table 2 for each of the three subjects. The difference between the
baseline and sensor times reflects the cost of limiting the FOV plus offsetting vision. The times for the front (forward)
sensor are less than those for the side and top sensors for all three subjects. Workload estimates were inconsistent and did not
correspond to response times.
Figure 6. Card-sorting task. Figure 7. This photo represents both the blind- and open-eye pointing tasks.
Table 2. Median card-sorting times (seconds).
Subject Baseline Front Side Top
S1 9 25 36 32
S2 9 20 25 21
S3 11 27 30 31
The small differences in Table 2, combined with the inconclusive workload data, led us to develop
another study of manual performance and sensor offset. We decided to separate the perceptual and performance aspects
of a manual task. A pointing task was developed in which subjects stood on a line 120 cm from the wall, then stepped forward
and pointed at an "X" target at a height of 150 cm. For the first part of this pointing study, subjects were instructed to look
at the target, close their eyes, and step forward and place their index finger on the "X" target. The experimenter promptly
noted the finger position on the sheet of paper. This "blind" pointing task represents the perceptual component of where
the target appears, with no visual guidance of the hand and finger. The pointing task is shown in Figure 7.
Three trials were run for each sensor condition, and the centroid of the resulting triangle of points was used as the summary
statistic. Figure 8 shows the results for this study. These results correspond to the prism-displacement studies, where the
apparent target position is opposite to the sensor position. Specifically, the top sensor results in the lowest perceived
target, and the (left) side sensor results in the target appearing to the right. The control or baseline condition with naked-
eye vision always resulted in the most accurate performance.
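The centroid is simply the mean of the three endpoint coordinates. A minimal sketch is shown below; the trial coordinates are made-up values (in cm relative to the target) purely for illustration, not recorded data.

```python
# Minimal sketch of the summary statistic: the centroid (mean x, mean y) of the
# three pointing endpoints for one sensor condition.  The trial coordinates are
# hypothetical values in cm relative to the "X" target, not recorded data.
def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

top_sensor_trials = [(-1.0, -9.0), (1.5, -11.0), (0.5, -12.5)]
print(centroid(top_sensor_trials))   # (0.33, -10.83): endpoints cluster below the target
```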
We next asked subjects to point to the same “X” target with their eyes open. Each trial started with the experimenter
removing a card that hid the target “X”. Since the end result was the index finger pointing at the target, we took video
recordings of each trial and noted the time from a go signal to finger-on-target plus any apparent strategies. The data for
the “eyes open” pointing task for a representative subject are shown in Figure 9. The control or baseline condition
showed the quickest response. We expected that response times would improve and level off across trials. This did not generally
occur, possibly because subjects tended to stab at the target sheet and then drag the index finger to the target in the first
few trials, but then began guiding the finger to the target, the net result being little difference in pointing time over the
nine trials. The hand and finger started each trial outside the 32° x 24° FOV. Based on video evidence, we speculate that
the two strategies were to stab with the finger into the FOV and then drag it onto the target, or to move the finger into the
FOV and then visually guide it to the target, with both taking about the same amount of time. No sensor position effect
can be discerned from the data from the four subjects.
Figure 8. Pointing performance in the blind-pointing task for four subjects (TP, TO, KM and JM).
Figure 9. Representative pointing data for the open-eye pointing task for one subject (TO).
4.3 Mobile tasks
We first asked subjects to describe their perceptions of the experimental room in terms of the floor appearing slanted or
objects looking distorted. A common response with the top-mounted camera was that the floor looked slanted downwards, with
objects at 5 to 10 meters looking small. No consistent perceptual effects were noted with the front or side mounts.
We tested the effects of sensor offset on mobility by constructing a simple obstacle course. Subjects briskly walked
through a course defined by cardboard boxes, stepping over two boxes one foot high and one foot deep and ducking under
five-foot-high entryways. Subjects also had to avoid several tall boxes on the left and right, and execute a hairpin left-hand turn.
The entire course was approximately 50 feet in length. Subjects were instructed to move briskly but not to purposely
knock over boxes. Figure 10 shows two views of the course.
Figure 10. U-shaped, 50-foot obstacle course constructed of stacked boxes.
Completion times for the obstacle-course task were 10 seconds for the baseline naked-eye condition, and between 18 and
42 seconds for the three sensor offsets. No sensor-offset trends were evident. Similarly, workload ratings also showed no
evident trends. Subjects hit a number of boxes while stepping over, going around and ducking under obstacles. We think
that subjects felt their way with their arms and hands and readily kicked the boxes to make their way through the course. We
decided to follow up with a closer look at components of this task.
We recorded video of two subjects stepping over a box one foot high and one foot wide, then circling around and passing close
by six-foot-high stacked boxes on the left, and then circling around and walking towards these boxes and stopping at a
distance of one foot (chest to box). This sequence was repeated for each camera offset. Subjects did not reach out to
touch any boxes. Representative video frames are shown in Figure 11.
The results of this demonstration are shown in Table 4. As with the other studies in this investigation, the naked-eye
control condition was associated with superior performance. The cost of the 32° x 24° FOV HMD view was misjudging
distances and sometimes running into obstacles. Arms and hands were not used to reach out and feel the obstacles. Both
subjects maintained a relatively large clearance in passing-by an obstacle on the left with the left-side-mounted camera.
Similarly, the approach distance was overestimated with the front-mounted camera. Both of these findings correspond to
the camera offset, and demonstrate that effects linked to specific sensor offsets are more likely to emerge with isolated,
simple tasks than with complex tasks.
Figure 11. Video frame sequences (1/10 second between frames) of stepping over, passing by and approaching tasks.
Table 4. Stepping over, passing by, and approaching obstacles (clearances in cm, by sensor offset)

Subject  Task        Control  Forward  Side  Top
KM       Step-over   10       Hit      Hit   30
KM       Pass-by     10       20       40    30
KM       Approach*   +1       +60      +40   +40
JM       Step-over   20       30       30    20
JM       Pass-by     5        Hit      40    Hit
JM       Approach*   -11      +29      +20   +29

* Relative to the instruction to stop at a distance of one foot.
5. SUMMARY AND MODEL
Task performance was degraded and workload estimates were higher for all sensor positions for manual and mobile
tasks relative to naked-eye vision. The likely cause of this global effect is the limited vision from the 32° x 24° HMD
FOV. If a sensor/display system has an angular error, performance will be dramatically degraded. The current study only
used straight-ahead, aligned sensors.
The current study only used left-eye monocular imagery. The subjects presented a mix of left and right eye dominance,
and left- and right-handedness. There were no comments or concerns about not seeing with both eyes, or even which eye
was used for viewing.
In agreement with the prism-displacement literature, pointing without real-time visual feedback errs opposite to the sensor-
offset position. Once the arm and hand are visible within the sensor FOV, the user can readily reach a target position,
regardless of sensor-offset position. The user can either stab at the apparent target location and then drag the hand to
the target, or make a reaching motion and then guide the hand to the target. The distinction between the hand not being
visible (obscured or outside the sensor FOV) and visible (within the sensor FOV) is critical to understanding reaching and
pointing performance.
A limitation of the current study is that it tested only a small number of subjects over a limited number of trials. The apparatus
had no look-around vision, which forced total visual reliance on sensor imagery. The tasks imposed minimal stress on
the subjects, unlike many tasks that would be encountered in the real world. Table 5 presents a simple model of a sensor-
offset helmet display system as it relates to perception and performance.
Table 5. Sensor-offset model

System configuration | Perception and performance | Evidence
Misaligned sensor/display | Walking in a curved path to a waypoint; distracting slant and rotation effects | Informal testing at RCO
Aligned sensor | Noticeable slant with the high sensor; misjudged closeness of right-side objects when walking; reaching opposite to the sensor position with no visual guidance | Current study, literature
Blind pointing | Pointing opposite to the sensor positions | Current study, large literature on prism displacement
Hands and feet visible | No large sensor-offset performance or effort differences for either manual or mobile tasks | Current study
Left-eye display | No evidence of eye awareness or dominant-eye effects; monocular versus binocular viewing not investigated | Current study, literature [8]
Moderate 32° FOV | Large decrements in performance and increases in reported effort relative to naked-eye vision | Current study, literature
REFERENCES
1. J. B. Pelz, M. Hayhoe, and R. Loeber, "The coordination of eye, head, and hand movements in a natural task," Experimental Brain Research, 139, 266-277 (2001).
2. R. B. Welch, "Adaptation of space perception," in K. R. Boff, L. Kaufman, and J. P. Thomas (eds.), Handbook of Perception and Human Performance, Volume I, New York: Wiley (1986).
3. F. A. Biocca and J. P. Rolland, "Virtual eyes can rearrange your body: Adaptation to visual displacement in see-through head-mounted displays," Presence, 7, 262-277 (1998).
4. V. G. CuQlock-Knopp, K. P. Myles, F. J. Malkin, and E. Bender, The Effects of Viewpoint Offsets of Night Vision Goggles on Human Performance in a Simulated Grenade Throwing Task, ARL-TR-2401, Aberdeen Proving Ground, MD: Army Research Laboratory (2001).
5. S. Mann, "Fundamental issues in mediated reality, WearComp, and camera-based augmented reality," in W. Barfield and T. Caudell (eds.), Fundamentals of Wearable Computers and Augmented Reality, Mahwah, NJ: Erlbaum (2001).
6. P. L. Alfano and G. F. Michel, "Restricting the field of view: perceptual and performance effects," Perceptual and Motor Skills, 70(1), 35-45 (1990).
7. K. W. Arthur, Effects of Field of View on Performance with Head-Mounted Displays, doctoral dissertation, University of North Carolina (2000).
8. A. P. Mapp, H. Ono, and R. Barbeito, "What does the dominant eye dominate? A brief and somewhat contentious review," Perception & Psychophysics, 65, 310-317 (2003).