Article

Generating accurate 3D gaze vectors using synchronized eye tracking and motion capture

Abstract

Assessing gaze behavior during real-world tasks is difficult: dynamic bodies moving through dynamic worlds complicate gaze analysis, and current approaches involve laborious manual coding of pupil positions. In settings where motion capture and mobile eye tracking are used concurrently in naturalistic tasks, it is critical that data collection be simple, efficient, and systematic. One solution is to combine eye tracking with motion capture to generate 3D gaze vectors. When combined with tracked or known object locations, 3D gaze vector generation can be automated. Here we use combined eye tracking and motion capture and explore how well linear regression models generate accurate 3D gaze vectors. We compared the spatial accuracy of models derived from four short calibration routines and three pupil data inputs (left eye, right eye, or binocular). Model efficacy was assessed on the calibration routines themselves, on a validation task requiring short fixations on task-relevant locations, and on a naturalistic object interaction task included to bridge the gap between laboratory and “in the wild” studies. Further, we generated and compared models using spherical and Cartesian world coordinate systems. All calibration routines performed similarly, with the best performance (i.e., sub-centimeter errors) coming from the naturalistic task trials when the participant was looking at an object in front of them. We found that spherical coordinate systems generate the most accurate gaze vectors, with no difference in accuracy between monocular and binocular data. Overall, we recommend 1-min calibration routines using binocular pupil data combined with a spherical world coordinate system to produce the highest-quality gaze vectors.
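
To make the modeling step concrete, here is a minimal sketch of how a linear regression of the kind described above could map binocular pupil positions to a gaze direction expressed in spherical coordinates and convert the prediction back to a 3D unit vector. The variable names, coordinate conventions, and data shapes are illustrative assumptions and do not reproduce the authors' actual pipeline.

```python
# Minimal sketch: fit a linear map from binocular pupil positions to gaze
# direction in spherical coordinates (azimuth, elevation), then convert the
# predictions back to 3D unit vectors. All names, shapes, and the head-fixed
# frame convention are illustrative assumptions, not the authors' pipeline.
import numpy as np
from sklearn.linear_model import LinearRegression

def to_spherical(v):
    """Unit gaze vectors (N, 3) -> azimuth/elevation in radians (N, 2)."""
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    az = np.arctan2(v[:, 0], v[:, 2])        # left/right relative to "forward" z
    el = np.arcsin(np.clip(v[:, 1], -1, 1))  # up/down
    return np.column_stack([az, el])

def to_vector(ae):
    """Azimuth/elevation (N, 2) -> unit gaze vectors (N, 3)."""
    az, el = ae[:, 0], ae[:, 1]
    return np.column_stack([np.cos(el) * np.sin(az),
                            np.sin(el),
                            np.cos(el) * np.cos(az)])

# Calibration data (hypothetical shapes): pupil_xy holds left and right pupil
# centers from the eye cameras; gaze_targets are known calibration-point
# positions re-expressed in a head-fixed frame using the motion capture data.
pupil_xy = np.random.rand(100, 4)       # stand-in for recorded pupil positions
gaze_targets = np.random.randn(100, 3)  # stand-in for target directions

model = LinearRegression().fit(pupil_xy, to_spherical(gaze_targets))

# At test time, predicted angles become a 3D gaze vector that can be
# intersected with tracked or known object locations in the world frame.
gaze_vec = to_vector(model.predict(pupil_xy[:5]))
```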

References
Article
Full-text available
Background Successful hand-object interactions require precise hand-eye coordination with continual movement adjustments. Quantitative measurement of this visuomotor behaviour could provide valuable insight into upper limb impairments. The Gaze and Movement Assessment (GaMA) was developed to provide protocols for simultaneous motion capture and eye tracking during the administration of two functional tasks, along with data analysis methods to generate standard measures of visuomotor behaviour. The objective of this study was to investigate the reproducibility of the GaMA protocol across two independent groups of non-disabled participants, with different raters using different motion capture and eye tracking technology. Methods Twenty non-disabled adults performed the Pasta Box Task and the Cup Transfer Task. Upper body and eye movements were recorded using motion capture and eye tracking, respectively. Measures of hand movement, angular joint kinematics, and eye gaze were compared to those from a different sample of twenty non-disabled adults who had previously performed the same protocol with different technology, rater and site. Results Participants took longer to perform the tasks versus those from the earlier study, although the relative time of each movement phase was similar. Measures that were dissimilar between the groups included hand distances travelled, hand trajectories, number of movement units, eye latencies, and peak angular velocities. Similarities included all hand velocity and grip aperture measures, eye fixations, and most peak joint angle and range of motion measures. Discussion The reproducibility of GaMA was confirmed by this study, despite a few differences introduced by learning effects, task demonstration variation, and limitations of the kinematic model. GaMA accurately quantifies the typical behaviours of a non-disabled population, producing precise quantitative measures of hand function, trunk and angular joint kinematics, and associated visuomotor behaviour. This work advances the consideration for use of GaMA in populations with upper limb sensorimotor impairment.
Article
Full-text available
Importance New treatments for upper-limb amputation aim to improve movement quality and reduce visual attention to the prosthesis. However, evaluation is limited by a lack of understanding of the essential features of human-prosthesis behavior and by an absence of consistent task protocols. Objective To evaluate whether task selection is a factor in visuomotor adaptations by prosthesis users to accomplish 2 tasks easily performed by individuals with normal arm function. Design, Setting, and Participants This cross-sectional study was conducted in a single research center at the University of Alberta, Edmonton, Alberta, Canada. Upper-extremity prosthesis users were recruited from January 1, 2016, through December 31, 2016, and individuals with normal arm function were recruited from October 1, 2015, through November 30, 2015. Eight prosthesis users and 16 participants with normal arm function were asked to perform 2 goal-directed tasks with synchronized motion capture and eye tracking. Data analysis was performed from December 3, 2018, to April 15, 2019. Main Outcome and Measures Movement time, eye fixation, and range of motion of the upper body during 2 object transfer tasks (cup and box) were the main outcomes. Results A convenience sample comprised 8 male prosthesis users with acquired amputation (mean [range] age, 45 [30-64] years), along with 16 participants with normal arm function (8 [50%] of whom were men; mean [range] age, 26 [18-43] years; mean [range] height, 172.3 [158.0-186.0] cm; all right handed). Prosthesis users spent a disproportionately prolonged mean (SD) time in grasp and release phases when handling the cups (grasp: 2.0 [2.3] seconds vs 0.9 [0.8] seconds; P < .001; release: 1.1 [0.6] seconds vs 0.7 [0.4] seconds; P < .001). Prosthesis users also had increased mean (SD) visual fixations on the hand for the cup compared with the box task during reach (10.2% [12.1%] vs 2.2% [2.8%]) and transport (37.1% [9.7%] vs 22.3% [7.6%]). Fixations on the hand for both tasks were significantly greater for prosthesis users compared with normative values. Prosthesis users had significantly more trunk flexion and extension for the box task compared with the cup task (mean [SD] trunk range of motion, 32.1 [10.7] degrees vs 21.2 [3.7] degrees; P = .01), with all trunk motions greater than normative values. The box task required greater shoulder movements compared with the cup task for prosthesis users (mean [SD] flexion and extension; 51.3 [12.6] degrees vs 41.0 [9.4] degrees, P = .01; abduction and adduction: 40.5 [7.2] degrees vs 32.3 [5.1] degrees, P = .02; rotation: 50.6 [15.7] degrees vs 35.5 [10.0] degrees, P = .02). However, other than shoulder abduction and adduction for the box task, these values were less than those seen for participants with normal arm function. Conclusions and Relevance This study suggests that prosthesis users have an inherently different way of adapting to varying task demands, therefore suggesting that task selection is crucial in evaluating visuomotor performance. The cup task required greater compensatory visual fixations and prolonged grasp and release movements, and the box task required specific kinematic compensatory strategies as well as increased visual fixation. This is the first study to date to examine visuomotor differences in prosthesis users across varying task demands, and the findings appear to highlight the advantages of quantitative assessment in understanding human-prosthesis interaction.
Article
Full-text available
Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, but markers are intrusive, and the number and location of the markers must be determined a priori. Here we present an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in multiple species across a broad collection of behaviors. Remarkably, even when only a small number of frames are labeled (~200), the algorithm achieves excellent tracking performance on test frames that is comparable to human accuracy.
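
As a rough illustration of the transfer-learning idea described above (a pretrained backbone reused as a feature extractor, with a small keypoint head fine-tuned on a few hundred labeled frames), here is a minimal PyTorch-style sketch. The architecture, layer sizes, and training details are assumptions for illustration and do not reproduce the specific toolbox evaluated in the abstract.

```python
# Minimal sketch of transfer learning for markerless keypoint tracking:
# a pretrained ResNet backbone with a small deconvolution head predicting one
# heatmap per body part, fine-tuned on a small labeled set of frames.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

class KeypointNet(nn.Module):
    def __init__(self, num_bodyparts: int):
        super().__init__()
        backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop pool/fc
        self.head = nn.Sequential(                                      # upsample to heatmaps
            nn.ConvTranspose2d(2048, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(256, num_bodyparts, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.head(self.features(x))  # (N, num_bodyparts, H/8, W/8) heatmaps

model = KeypointNet(num_bodyparts=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Fine-tuning loop over the small labeled set (frames plus target heatmaps):
# for frames, target_heatmaps in labeled_loader:
#     loss = loss_fn(model(frames), target_heatmaps)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```
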
Article
Full-text available
This study explores the role that vision plays in sequential object interactions. We used a head-mounted eye tracker and upper-limb motion capture to quantify visual behavior while participants performed two standardized functional tasks. By simultaneously recording eye and motion tracking, we precisely segmented participants' visual data using the movement data, yielding a consistent and highly functionally resolved data set of real-world object-interaction tasks. Our results show that participants spend nearly the full duration of a trial fixating on objects relevant to the task, little time fixating on their own hand when reaching toward an object, and slightly more time (although still very little) fixating on the object in their hand when transporting it. A consistent spatial and temporal pattern of fixations was found across participants. In brief, participants fixate an object to be picked up at least half a second before their hand arrives at the object and stay fixated on the object until they begin to transport it, at which point they shift their fixation directly to the drop-off location of the object, where they stay fixated until the object is successfully released. This pattern provides additional evidence of a common system for the integration of vision and object interaction in humans, and is consistent with theoretical frameworks hypothesizing the distribution of attention to future action targets as part of eye and hand-movement preparation. Our results thus aid the understanding of visual attention allocation during planning of object interactions both inside and outside the field of view.
Article
Full-text available
Background Dexterous hand function is crucial for completing activities of daily living (ADLs), which typically require precise hand-object interactions. Kinematic analyses of hand trajectory, hand velocity, and grip aperture provide valuable mechanistic insights into task performance, but there is a need for standardized tasks representative of ADLs that are amenable to motion capture and show consistent performance in non-disabled individuals. Our objective was to develop two standardized functional upper limb tasks and to quantitatively characterize the kinematics of normative hand movement. Methods Twenty non-disabled participants were recruited to perform two tasks: the Pasta Box Task and Cup Transfer Task. A 12-camera motion capture system was used to collect kinematic data from which hand movement and grip aperture measures were calculated. Measures reported for reach-grasp and transport-release segments were hand distance travelled, hand trajectory variability, movement time, peak and percent-to-peak hand velocity, number of movement units, peak and percent-to-peak grip aperture, and percent-to-peak hand deceleration. A between-session repeatability analysis was conducted on 10 participants. Results Movement times were longer for transport-release compared to reach-grasp for every movement. Hand and grip aperture measures had low variability, with 55 out of 63 measures showing good repeatability (ICC > 0.75). Cross-body movements in the Pasta Box Task had longer movement times and reduced percent-to-peak hand velocity values. The Cup Transfer Task showed decoupling of peak grip aperture and peak hand deceleration for all movements. Movements requiring the clearing of an obstacle while transporting an object displayed a double velocity peak and typically a longer deceleration phase. Discussion Normative hand kinematics for two standardized functional tasks challenging various aspects of hand-object interactions important for ADLs showed excellent repeatability. The consistency in normative task performance across a variety of task demands shows promise as a potential outcome assessment for populations with upper limb impairment.
Article
Full-text available
Objective: Sport research often requires human motion capture of an athlete. It can, however, be labour-intensive and difficult to select the right system, while manufacturers report on specifications which are determined in set-ups that largely differ from sport research in terms of volume, environment and motion. The aim of this review is to assist researchers in the selection of a suitable motion capture system for their experimental set-up for sport applications. An open online platform is initiated, to support (sport)researchers in the selection of a system and to enable them to contribute and update the overview. Design: systematic review; Method: Electronic searches in Scopus, Web of Science and Google Scholar were performed, and the reference lists of the screened articles were scrutinised to determine human motion capture systems used in academically published studies on sport analysis. Results: An overview of 17 human motion capture systems is provided, reporting the general specifications given by the manufacturer (weight and size of the sensors, maximum capture volume, environmental feasibilities), and calibration specifications as determined in peer-reviewed studies. The accuracy of each system is plotted against the measurement range. Conclusion: The overview and chart can assist researchers in the selection of a suitable measurement system. To increase the robustness of the database and to keep up with technological developments, we encourage researchers to perform an accuracy test prior to their experiment and to add to the chart and the system overview (online, open access).
Article
Full-text available
The aim of this study was to provide a detailed account of the spatial and temporal disruptions to eye-hand coordination when using a prosthetic hand during a sequential fine motor skill. Twenty-one abled-bodied participants performed 15 trials of the ‘picking up coins’ task derived from the Southampton Hand Assessment Procedure (SHAP) with their anatomic hand and with a prosthesis simulator while wearing eye-tracking equipment. Gaze behaviour results revealed that when using the prosthesis, performance detriments were accompanied by significantly greater hand-focused gaze and a significantly longer time to disengage gaze from manipulations to plan upcoming movements. Our findings highlight key metrics that distinguish disruptions to eye-hand coordination that might have implications for the training of prosthesis use.
Article
Full-text available
Understanding the brain's capacity to encode complex visual information from a scene and to transform it into a coherent perception of 3D space and into well-coordinated motor commands is among the outstanding questions in the study of integrative brain function. Eye movement methodologies have allowed us to begin addressing these questions in increasingly naturalistic tasks, where eye and body movements are ubiquitous and the applicability of most traditional neuroscience methods is therefore restricted. This review explores foundational issues in (1) how oculomotor and motor control in lab experiments extrapolates into more complex settings and (2) how real-world gaze behavior in turn decomposes into more elementary eye movement patterns. We review the received typology of oculomotor patterns in laboratory tasks, and how they map onto naturalistic gaze behavior (or not). We discuss the multiple coordinate systems needed to represent visual gaze strategies, how the choice of reference frame affects the description of eye movements, and the related but conceptually distinct issue of coordinate transformations between internal representations within the brain.
Article
Full-text available
Many empirical researchers do not realize that the common multiway analysis of variance (ANOVA) harbors a multiple comparison problem. In the case of two factors, three separate null hypotheses are subject to test (i.e., two main effects and one interaction). Consequently, the probability of at least one Type I error (if all null hypotheses are true) is 14% rather than 5% if the three tests are independent. We explain the multiple comparison problem and demonstrate that researchers almost never correct for it. We describe one of several correction procedures (i.e., sequential Bonferroni), and show that its application alters at least one of the substantive conclusions in 45 out of 60 articles considered. An additional method to mitigate the multiplicity in multiway ANOVA is preregistration of hypotheses.
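
As a quick check of the 14% figure: with three independent tests each run at α = .05, the familywise Type I error rate is

```latex
P(\text{at least one Type I error}) = 1 - (1 - 0.05)^{3} = 1 - 0.95^{3} \approx 0.14
```
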
Article
Full-text available
Commercial head-mounted eye trackers provide useful features to customers in industry and research but are expensive and rely on closed source hardware and software. This limits the application areas and use of mobile eye tracking to expert users and inhibits user-driven development, customisation, and extension. In this paper we present Pupil -- an accessible, affordable, and extensible open source platform for mobile eye tracking and gaze-based interaction. Pupil comprises 1) a light-weight headset with high-resolution cameras, 2) an open source software framework for mobile eye tracking, as well as 3) a graphical user interface (GUI) to playback and visualize video and gaze data. Pupil features high-resolution scene and eye cameras for monocular and binocular gaze estimation. The software and GUI are platform-independent and include state-of-the-art algorithms for real-time pupil detection and tracking, calibration, and accurate gaze estimation. Results of a performance evaluation show that Pupil can provide an average gaze estimation accuracy of 0.6 degree of visual angle (0.08 degree precision) with a latency of the processing pipeline of only 0.045 seconds.
Article
Full-text available
Video-based gaze-tracking systems are typically restricted in terms of their effective tracking space. This constraint limits the use of eyetrackers in studying mobile human behavior. Here, we compare two possible approaches for estimating the gaze of participants who are free to walk in a large space whilst looking at different regions of a large display. Geometrically, we linearly combined eye-in-head rotations and head-in-world coordinates to derive a gaze vector and its intersection with a planar display, by relying on the use of a head-mounted eyetracker and body-motion tracker. Alternatively, we employed Gaussian process regression to estimate the gaze intersection directly from the input data itself. Our evaluation of both methods indicates that a regression approach can deliver comparable results to a geometric approach. The regression approach is favored, given that it has the potential for further optimization, provides confidence bounds for its gaze estimates and offers greater flexibility in its implementation. Open-source software for the methods reported here is also provided for user implementation.
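
To make the regression alternative concrete, the sketch below fits a Gaussian process that maps eye-in-head and head-in-world measurements directly to 2D gaze positions on the display. The feature layout, kernel choice, and variable names are illustrative assumptions rather than the paper's exact implementation.

```python
# Minimal sketch of the regression approach: learn a direct mapping from
# eye-in-head and head-in-world measurements to 2D gaze position on the
# display using Gaussian process regression.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Calibration samples (hypothetical layout): each row concatenates eye-in-head
# angles and the tracked head pose (position + orientation); targets are the
# known 2D locations of calibration markers on the display.
X_calib = np.random.rand(60, 8)  # stand-in for recorded calibration features
y_calib = np.random.rand(60, 2)  # stand-in for marker positions on the screen

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_calib, y_calib)

# Predictions come with a standard deviation, illustrating the confidence
# bounds mentioned in the abstract as an advantage of the regression approach.
gaze_xy, gaze_std = gpr.predict(X_calib[:5], return_std=True)
```
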
Article
Full-text available
Most of the time, the human visual system computes perceived size by scaling the size of an object on the retina with its perceived distance. There are instances, however, in which size-distance scaling is not based on visual inputs but on extraretinal cues. In the Taylor illusion, the perceived afterimage that is projected on an observer's hand will change in size depending on how far the limb is positioned from the eyes-even in complete darkness. In the dark, distance cues might derive from hand position signals either by an efference copy of the motor command to the moving hand or by proprioceptive input. Alternatively, there have been reports that vergence signals from the eyes might also be important. We performed a series of behavioral and eye-tracking experiments to tease apart how these different sources of distance information contribute to the Taylor illusion. We demonstrate that, with no visual information, perceived size changes mainly as a function of the vergence angle of the eyes, underscoring its importance in size-distance scaling. Interestingly, the strength of this relationship decreased when a mismatch between vergence and proprioception was introduced, indicating that proprioceptive feedback from the arm also affected size perception. By using afterimages, we provide strong evidence that the human visual system can benefit from sensory signals that originate from the hand when visual information about distance is unavailable.
Article
Full-text available
Vision plays a crucial role in guiding motor actions. But sometimes we cannot use vision and must rely on our memory to guide action-e.g. remembering where we placed our eyeglasses on the bedside table when reaching for them with the lights off. Recent studies show subjects look towards the index finger grasp position during visually-guided precision grasping. But, where do people look during memory-guided grasping? Here, we explored the gaze behaviour of subjects as they grasped a centrally placed symmetrical block under open- and closed-loop conditions. In Experiment 1, subjects performed grasps in either a visually-guided task or memory-guided task. The results show that during visually-guided grasping, gaze was first directed towards the index finger's grasp point on the block, suggesting gaze targets future grasp points during the planning of the grasp. Gaze during memory-guided grasping was aimed closer to the blocks' centre of mass from block presentation to the completion of the grasp. In Experiment 2, subjects performed an 'immediate grasping' task in which vision of the block was removed immediately at the onset of the reach. Similar to the visually-guided results from Experiment 1, gaze was primarily directed towards the index finger location. These results support the 2-stream theory of vision in that motor planning with visual feedback at the onset of the movement is driven primarily by real-time visuomotor computations of the dorsal stream, whereas grasping remembered objects without visual feedback is driven primarily by the perceptual memory representations mediated by the ventral stream.
Conference Paper
Full-text available
Human eye gaze is a strong candidate to create a new application area based on human-computer interaction. To implement a really practical gaze-based interaction system, gaze detection must be realized without placing any restriction on the user's behavior or comfort. This paper describes a gaze tracking system that offers free-head, simple personal calibration. It does not require the user to wear anything on her head, and she can move her head freely. Personal calibration takes only a very short time; the user is asked to look at two markers on the screen. An experiment shows that the accuracy of the implemented system is about 1.0 degrees (view angle).
Article
Full-text available
It is well established that patients with hemispatial neglect present with severe visuospatial impairments, but studies that have directly investigated visuomotor control have revealed diverging results, with some studies showing that neglect patients perform relatively better on such tasks. The present study compared the visuomotor performance of patients with and without neglect after right-hemisphere stroke with those of age-matched controls. Participants were asked to point either directly towards targets or halfway between two stimuli, both with and without visual feedback during movement. Although we did not find any neglect-specific impairment, both patient groups showed increased reaction times to leftward stimuli as well as decreased accuracies for open loop leftward reaches. We argue that these findings agree with the view that neglect patients code spatial parameters for action veridically. Moreover, we suggest that lesions in the right hemisphere may cause motor deficits irrespective of the presence of neglect and we performed an initial voxel-lesion symptom analysis to assess this. Lesion-symptom analysis revealed that the reported deficits did not result from damage to neglect-associated areas alone, but were further associated with lesions to crucial nodes in the visuomotor control network (the basal ganglia as well as occipito-parietal and frontal areas).
Article
Full-text available
It is well known that, typically, saccadic eye movements precede goal-directed hand movements to a visual target stimulus. Pointing in general is also more accurate when the pointing target is gazed at. In this study, it is hypothesized that saccades not only precede pointing but that gaze is also stabilized during pointing in humans. Subjects, whose eye and pointing movements were recorded, had to make a hand movement and a saccade to a first target. At arm movement peak velocity, when the eyes are usually already fixating the first target, a new target appeared, and subjects had to make a saccade toward it (dynamic trial type). In the static trial type, a new target was offered when pointing was just completed. In a control experiment, a sequence of two saccades had to be made, with two different interstimulus intervals (ISI), comparable with the ISIs found in the first experiment for dynamic and static trial types. In a third experiment, ocular fixation position and pointing target were dissociated: subjects pointed at targets they were not fixating. The results showed that latencies of saccades toward the second target were on average 155 ms longer in the dynamic trial types, compared with the static trial types. Saccades evoked during pointing appeared to be delayed by approximately the remaining deceleration time of the pointing movement, resulting in "normal" residual saccadic reaction times (RTs), measured from pointing movement offset to saccade movement onset. In the control experiment, the latency of the second saccade was on average only 29 ms larger when the two targets appeared with a short ISI compared with trials with long ISIs. Therefore, the saccadic refractory period cannot be responsible for the substantially larger delays that were found in the first experiment. The observed saccadic delay during pointing is modulated by the distance between ocular fixation position and pointing target. The largest delays were found when the targets coincided, the smallest delays when they were dissociated. In sum, our results provide evidence for an active saccadic inhibition process, presumably to keep steady ocular fixation at a pointing target and its surroundings. Possible neurophysiological substrates that might underlie the reported phenomena are discussed.
Article
Full-text available
The aim of this study was to determine the pattern of fixations during the performance of a well-learned task in a natural setting (making tea), and to classify the types of monitoring action that the eyes perform. We used a head-mounted eye-movement video camera, which provided a continuous view of the scene ahead, with a dot indicating foveal direction with an accuracy of about 1 deg. A second video camera recorded the subject's activities from across the room. The videos were linked and analysed frame by frame. Foveal direction was always close to the object being manipulated, and very few fixations were irrelevant to the task. The first object-related fixation typically led the first indication of manipulation by 0.56 s, and vision moved to the next object about 0.61 s before manipulation of the previous object was complete. Each object-related act that did not involve a waiting period lasted an average of 3.3 s and involved about 7 fixations. Roughly a third of all fixations on objects could be definitely identified with one of four monitoring functions: locating objects used later in the process, directing the hand or object in the hand to a new location, guiding the approach of one object to another (e.g. kettle and lid), and checking the state of some variable (e.g. water level). We conclude that although the actions of tea-making are 'automated' and proceed with little conscious involvement, the eyes closely monitor every step of the process. This type of unconscious attention must be a common phenomenon in everyday life.
Article
Full-text available
Two recent studies have investigated the relations of eye and hand movements in extended food preparation tasks, and here the results are compared. The tasks could be divided into a series of actions performed on objects. The eyes usually reached the next object in the sequence before any sign of manipulative action, indicating that eye movements are planned into the motor pattern and lead each action. The eyes usually fixated the same object throughout the action upon it, although they often moved on to the next object in the sequence before completion of the preceding action. The specific roles of individual fixations could be identified as locating (establishing the locations of objects for future use), directing (establishing target direction prior to contact), guiding (supervising the relative movements of two or three objects) and checking (establishing whether some particular condition is met, prior to the termination of an action). It is argued that, at the beginning of each action, the oculomotor system is supplied with the identity of the required object, information about its location, and instructions about the nature of the monitoring required during the action. The eye movements during this kind of task are nearly all to task-relevant objects, and thus their control is seen as primarily 'top-down', and influenced very little by the 'intrinsic salience' of objects.
Article
Full-text available
One of the most important functions of vision is to direct actions to objects. However, every time that vision is used to guide an action, retinal motion signals are produced by the movement of the eye and head as the person looks at the object or by the motion of other objects in the scene. To reach for the object accurately, the visuomotor system must separate information about the position of the stationary target from background retinal motion signals-a long-standing problem that is poorly understood. Here we show that the visuomotor system does not distinguish between these two information sources: when observers made fast reaching movements to a briefly presented stationary target, their hand shifted in a direction consistent with the motion of a distant and unrelated stimulus, a result contrary to most other findings. This can be seen early in the hand's trajectory (approximately 120 ms) and occurs continuously from programming of the movement through to its execution. The visuomotor system might make use of the motion signals arising from eye and head movements to update the positions of targets rapidly and redirect the hand to compensate for body movements.
Article
Full-text available
The authors provide evidence that choking under pressure is associated with changes in visual attention. Ten elite biathlon shooters were tested under separate low-pressure (LP) and high-pressure (HP) conditions after exercising on a cycle ergometer at individually prescribed power output (PO) levels of 55%, 70%, 85%, and 100% of their maximum oxygen uptake. The authors determined difference scores by subtracting each athlete's score in the LP condition from his or her score in the HP condition for heart rate (d-HR), rate of perceived exertion (d-RPE), cognitive anxiety (d-CA), and cognitive worry (d-CW), and final fixation on the target or quiet eye gaze (d-QE). Using regression analysis, the authors determined predictors of accuracy for each HP PO level. At PO 55%, the authors found 3 predictors (d-HR, d-RPE, d-QE) that accounted for .62 of the adjusted R2 variance. Accuracy was higher when d-QE was lower and d-RPE and d-HR were higher than the values found in the LP condition. At PO 100%, however, an increase in d-QE and d-RPE accounted for .58 of the adjusted R2 variance. Accuracy was dependent on an increase in external focus (positive d-QE) independently of heart rate. At the highest PO level, directing visual attention externally to critical task information appeared to insulate the athletes from choking under HP.
Article
Everyday tasks such as catching a ball appear effortless, but in fact require complex interactions and tight temporal coordination between the brain’s visual and motor systems. What makes such interceptive actions particularly impressive is the capacity of the brain to account for temporal delays in the central nervous system—a limitation that can be mitigated by making predictions about the environment as well as one’s own actions. Here, we wanted to assess how well human participants can plan an upcoming movement based on a dynamic, predictable stimulus that is not the target of action. A central stationary or rotating stimulus determined the probability that each of two potential targets would be the eventual target of a rapid reach-to-touch movement. We examined the extent to which reach movement trajectories convey internal predictions about the future state of dynamic probabilistic information conveyed by the rotating stimulus. We show that movement trajectories reflect the target probabilities determined at movement onset, suggesting that humans rapidly and accurately integrate visuospatial predictions and estimates of their own reaction times to effectively guide action.
Article
Quantifying angular joint kinematics of the upper body is a useful method for assessing upper limb function. Joint angles are commonly obtained via motion capture, tracking markers placed on anatomical landmarks. This method is associated with limitations including administrative burden, soft tissue artifacts, and intra- and inter-tester variability. An alternative method involves the tracking of rigid marker clusters affixed to body segments, calibrated relative to anatomical landmarks or known joint angles. The accuracy and reliability of applying this cluster method to the upper body has, however, not been comprehensively explored. Our objective was to compare three different upper body cluster models with an anatomical model, with respect to joint angles and reliability. Non-disabled participants performed two standardized functional upper limb tasks with anatomical and cluster markers applied concurrently. Joint angle curves obtained via the marker clusters with three different calibration methods were compared to those from an anatomical model, and between-session reliability was assessed for all models. The cluster models produced joint angle curves which were comparable to and highly correlated with those from the anatomical model, but exhibited notable offsets and differences in sensitivity for some degrees of freedom. Between-session reliability was comparable between all models, and good for most degrees of freedom. Overall, the cluster models produced reliable joint angles that, however, cannot be used interchangeably with anatomical model outputs to calculate kinematic metrics. Cluster models appear to be an adequate, and possibly advantageous alternative to anatomical models when the objective is to assess trends in movement behavior.
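
The sketch below illustrates the core of the cluster approach as described: a one-time calibration pose provides a constant offset between each cluster frame and its anatomical segment frame, after which joint angles come from the relative rotation of adjacent calibrated segments. Function names, rotation conventions, and the Euler sequence are illustrative assumptions, not the specific models compared in the study.

```python
# Minimal sketch of the cluster idea: each segment's marker cluster gives a
# rotation matrix in the lab frame; a calibration trial in a known pose gives
# a constant offset aligning each cluster frame with the anatomical frame.
import numpy as np
from scipy.spatial.transform import Rotation as R

def calibration_offset(R_cluster_cal, R_anatomical_cal):
    """Constant cluster-to-anatomical rotation, measured once in the calibration pose."""
    return R_cluster_cal.T @ R_anatomical_cal

def joint_angles(R_parent_cluster, R_child_cluster, offset_parent, offset_child,
                 sequence="ZXY"):
    """Euler angles (degrees) of the child segment relative to the parent."""
    R_parent = R_parent_cluster @ offset_parent  # calibrated parent segment frame
    R_child = R_child_cluster @ offset_child     # calibrated child segment frame
    R_joint = R_parent.T @ R_child               # child expressed in the parent frame
    return R.from_matrix(R_joint).as_euler(sequence, degrees=True)
```
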
Article
Most gaze tracking techniques estimate gaze points on screens, on scene images, or in confined spaces. Tracking of gaze in open-world coordinates, especially in walking situations, has rarely been addressed. We use a head-mounted eye tracker combined with two inertial measurement units (IMU) to track gaze orientation relative to the heading direction in outdoor walking. Head movements relative to the body are measured by the difference in output between the IMUs on the head and body trunk. The use of the IMU pair reduces the impact of environmental interference on each sensor. The system was tested in busy urban areas and allowed drift compensation for long (up to 18 min) gaze recording. Comparison with ground truth revealed an average error of 3.3° while walking straight segments. The range of gaze scanning in walking is frequently larger than the estimation error by about one order of magnitude. Our proposed method was also tested with real cases of natural walking and it was found to be suitable for the evaluation of gaze behaviors in outdoor environments.
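
A minimal sketch of the IMU-pair idea, assuming both sensors report orientation quaternions: the head-on-trunk rotation is the trunk orientation inverted and composed with the head orientation, so interference common to both sensors largely cancels. The quaternion handling and angle convention here are implementation assumptions, not the paper's method.

```python
# Head orientation relative to the trunk from a head IMU and a trunk IMU,
# each reporting an orientation quaternion (x, y, z, w).
import numpy as np
from scipy.spatial.transform import Rotation as R

def head_re_trunk_yaw(q_head, q_trunk):
    """Yaw (degrees) of the head relative to the trunk from two IMU quaternions."""
    rel = R.from_quat(q_trunk).inv() * R.from_quat(q_head)
    return rel.as_euler("ZYX", degrees=True)[0]  # first angle = yaw about vertical

# Gaze orientation relative to heading is then (roughly) the eye-in-head
# azimuth from the eye tracker plus this head-relative-to-trunk yaw.
```
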
Article
We present a new calibration method to combine a mobile eye tracker with an external tracking system to obtain a 3D gaze vector. Our method captures calibration points of varying distances, pupil positions and head positions/orientations. With these data we can determine the eye position relative to the user's head position without separate manual eye-position measurements. For this approach, it is not necessary to know the orientation of the eye coordinate system in advance. In addition to the calibration of the external tracking system, we can calibrate the head-tracked eye tracker in a one-step process, requiring the user to look at the calibration points. No extra calibration of the eye tracker is necessary if the raw pupil position in the eye camera is available from the eye tracker. The calibrated system allows us to estimate the 3D gaze vector for a user who can move freely within the range of the external tracking system. Our evaluation shows that the average accuracy of the visual angle is better than one degree in a self-evaluation and approximately two degrees under unrestrained head movement.
Article
This paper presents the case for a functional account of vision. A variety of studies have consistently revealed "change blindness" or insensitivity to changes in the visual scene during an eye movement. These studies indicate that only a small part of the information in the scene is represented in the brain from moment to moment. It is still unclear, however, exactly what is included in visual representations. This paper reviews experiments using an extended visuo-motor task, showing that display changes affect performance differently depending on the observer's place in the task. These effects are revealed by increases in fixation duration following a change. Different task-dependent increases suggest that the visual system represents only the information that is necessary for the immediate visual task. This allows a principled exploration of the stimulus properties that are included in the internal visual representation. The task specificity also has a more general implication that vision should be conceptualized as an active process executing special purpose "routines" that compute only the currently necessary information. Evidence for this view and its implications for visual representations are discussed. Comparison of the change blindness phenomenon and fixation durations shows that conscious report does not reveal the extent of the representations computed by the routines.
Article
The aim of this study was to test the predictions of attentional control theory using the quiet eye period as an objective measure of attentional control. Ten basketball players took free throws in two counterbalanced experimental conditions designed to manipulate the anxiety they experienced. Point of gaze was measured using an ASL Mobile Eye tracker and fixations including the quiet eye were determined using frame-by-frame analysis. The manipulation of anxiety resulted in significant reductions in the duration of the quiet eye period and free throw success rate, thus supporting the predictions of attentional control theory. Anxiety impaired goal-directed attentional control (quiet eye period) at the expense of stimulus-driven control (more fixations of shorter duration to various targets). The findings suggest that attentional control theory may be a useful theoretical framework for examining the relationship between anxiety and performance in visuomotor sport skills.
Article
The aim of the present study was to ascertain the neural correlates for the integration of visual information with the control of the reach-to-grasp action in the healthy human brain. Nine adult subjects (18-38 years; four females and five males) were scanned using functional magnetic resonance imaging while reaching to grasp a three-dimensional target. Results demonstrated differential activation of the parietal cortices according to the number of potential targets to be taken into account before movement initiation and the variability of target location. Comparing conditions in which the target object could appear at an unpredictable location with conditions in which it appeared at a predictable location revealed activations in the left superior parietal lobule, the left parieto-occipital sulcus and the right intraparietal sulcus. Results are discussed in terms of visual selective attention and action planning.
Article
In real life situations large gaze saccades may involve rotations of the trunk, as well as the eyes and head. When this happens the rotation of the head-in-space is similar whether or not the trunk is also rotating. However, the rotation of the head on the trunk (i.e. the neck movement) is very different in the two circumstances. For similar head-in-space rotations to occur, the neck and trunk movements cannot simply add independently: they must be coordinated. It is argued that this is achieved via a feedback loop in which the semi-circular canals monitor the rotation of the head-in-space, and the neck is driven by an error signal representing the difference between the intended head-in-space trajectory and the actual trajectory. This mechanism, which is essentially the same as the vestibulo-collic reflex, nulls out disturbances to the head-in-space trajectory, whether these are caused by active or passive trunk rotation.
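
A toy simulation of the feedback loop described above, under arbitrary assumed gains and dynamics: the neck command is driven by the error between the intended and the canal-sensed head-in-space angle, so the head trajectory is largely protected from ongoing trunk rotation.

```python
# Toy simulation: the neck is driven by the error between the intended and
# actual head-in-space angle, compensating for an oscillating trunk.
# Gain, time step, and first-order dynamics are arbitrary illustrative choices.
import numpy as np

dt, gain = 0.01, 20.0
t = np.arange(0, 2, dt)
intended_head = np.full_like(t, 30.0)       # desired head-in-space angle (deg)
trunk = 20.0 * np.sin(2 * np.pi * 0.5 * t)  # trunk rotating under the head

head = np.zeros_like(t)                     # actual head-in-space angle
neck = np.zeros_like(t)                     # neck (head-on-trunk) angle
for i in range(1, len(t)):
    error = intended_head[i] - head[i - 1]  # canal-sensed trajectory error
    neck[i] = neck[i - 1] + gain * error * dt  # neck driven by the error signal
    head[i] = trunk[i] + neck[i]            # head-in-space = trunk + neck

# head stays close to the intended 30 deg even though the trunk keeps rotating.
```
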
Article
The classic experiments of Yarbus over 50 years ago revealed that saccadic eye movements reflect cognitive processes. But it is only recently that three separate advances have greatly expanded our understanding of the intricate role of eye movements in cognitive function. The first is the demonstration of the pervasive role of the task in guiding where and when to fixate. The second has been the recognition of the role of internal reward in guiding eye and body movements, revealed especially in neurophysiological studies. The third important advance has been the theoretical developments in the fields of reinforcement learning and graphic simulation. All of these advances are proving crucial for understanding how behavioral programs control the selection of visual information.
Article
We all share a desire to understand and predict human cognition and behaviour as it occurs within complex real-world situations. This target article seeks to open a dialogue with our colleagues regarding this common goal. We begin by identifying the principles of most lab-based investigations and conclude that adhering to them will fail to generate valid theories of human cognition and behaviour in natural settings. We then present an alternative set of principles within a novel research framework called 'Cognitive Ethology'. We discuss how Cognitive Ethology can complement lab-based investigations, and we show how its levels of description and explanation are distinct from what is typically employed in lab-based research.
Hidden multiplicity in exploratory multiway ANOVA: prevalence and remedies
  • A. O. Cramer
  • D. van Ravenzwaaij
  • D. Matzke
  • H. Steingroever
  • R. Wetzels
  • R. P. Grasman
  • E.-J. Wagenmakers
JASP (Version ) [Computer software]
  • JASP Team