2004-01-2178
Modeling the Coordinated Movements of the Head and Hand
Using Differential Inverse Kinematics
K. Han Kim1, R. Brent Gillespie2 and Bernard J. Martin3
1,3HUMOSIM Laboratory, The University of Michigan
2Department of Mechanical Engineering, The University of Michigan
Copyright © 2004 SAE International
ABSTRACT
Hand reach movements for manual work, vehicle
operation, and manipulation of controls are planned and
guided by visual images actively captured through eye
and head movements. It is hypothesized that reach
movements are based on the coordination of multiple
subsystems that pursue the individual goals of visual
gaze and manual reach. In the present study, shared
control coordination was simulated in reach movements
modeled using differential inverse kinematics. An 8-DOF
model represented the torso-neck-head link (visual
subsystem), and a 9-DOF model represented the torso-
upper limb link (manual subsystem). Joint
angles were predicted in the velocity domain via a
pseudo-inverse Jacobian that weighted each link for its
contribution to the movement. A secondary objective
function was introduced to enable both subsystems to
achieve the corresponding movement goals in a
coordinated manner by manipulating redundant degrees
of freedom. Simulated motions were compared to motion
recordings from ten subjects performing right-hand
reaches in a seated posture. Joint angles were predicted
with and without the contribution of the coordination
function, and model accuracy was determined using the
RMS error and differences in end posture angles. The
results indicated that prediction accuracy was generally
better when the coordination function was included. This
improvement was more pronounced for low and
eccentric targets, as they required greater contribution of
the joints shared by both visual and manual subsystems.
INTRODUCTION
Motion prediction models of lifting, reaching, or pointing
tasks have commonly been based on a multi-body
link system with a single end-effector [1, 2, 3, 4].
However, even simple reaching movements may include
multiple task components, other than moving the hand
toward the goal target. For example, reaching involves
the movement of the head and the eyes to capture
images of the environment and build an internal
representation of the space in which hand movements
are planned and guided. It has been shown that head
and/or eye movements are modulated by the movement
of the whole body and the hand [5, 6, 7]. In addition,
whole body and/or hand movements are also adjusted
as a function of visual perception of the environment [8,
9, 10]. Hence it may be suggested that the central
nervous system (CNS), while planning and executing a
movement, simultaneously coordinates multiple
subsystems that pursue individual goals (guiding the
hand, displacing the gaze, etc.) in order to achieve the
general aim of the task (reaching for the target).
Seated reach movements include the movements of the
visual subsystem (eye – head – neck – torso), and the
manual subsystem (finger – hand – forearm – upper arm
– clavicle – torso). Depending on target location and
initial posture, both systems may move synergistically (in
the same direction) or antagonistically (in different
directions) with respect to each other. Since the visual
and manual subsystems share a common link (torso), it
is hypothesized that the two subsystems negotiate the
control of this common link involved in the motion of their
respective end-effectors. The CNS should plan and
coordinate the movement of each subsystem in order to
allocate the use of the common link in such a manner
that both subsystems achieve their individual goals. The
coordination of multiple subsystems is made possible by
manipulating the redundant degrees of freedom of each
subsystem.
The aims of the present study are to 1) construct a
multibody link system representing the visual and
manual subsystems; 2) develop optimization-based
inverse kinematics models to simulate the movements of
the subsystems separately and integratively; and 3)
quantify the benefit of the integration of multiple
subsystems by comparing simulated and actual
movements.
METHODS
DIFFERENTIAL INVERSE KINEMATICS
The manual subsystem (finger-hand-forearm-upper arm-
clavicle-torso, Figure 1A) and the visual subsystem
(eye-head-neck-torso links, Figure 1B) were modeled to
represent a human subject performing a seated reach
task. The manual and visual subsystems are composed
of nine and eight revolute joints, respectively (Table 1).
Table 1. Joint composition of the manual and visual subsystems

Manual subsystem joints                  Visual subsystem joints
qm1  Torso extension (+)                 qv1  Torso extension (+)
qm2  Torso lateral bending (+ left)      qv2  Torso lateral bending (+ left)
qm3  Torso axial rotation (+ CCW)        qv3  Torso axial rotation (+ CCW)
qm4  Clavicle horizontal (+ forward)     qv4  Neck vertical (+ up)
qm5  Clavicle vertical (+ up)            qv5  Neck horizontal (+ left)
qm6  Shoulder flexion (+)                qv6  Head extension (+)
qm7  Shoulder abduction (+)              qv7  Head tilt (+ left)
qm8  Upper arm axial rotation (+ CCW)    qv8  Head axial rotation (+ CCW)
qm9  Elbow flexion (+)
The joint angles of each subsystem are combined and
represented by a vector as follows:
q = [q_c^T, q_m^T, q_v^T]^T

where

q_c = [q1, q2, q3]^T (torso joint angles, common to both manual and visual subsystems)
q_m = [qm4, qm5, qm6, qm7, qm8, qm9]^T (clavicle-shoulder-elbow angles: manual subsystem)
q_v = [qv4, qv5, qv6, qv7, qv8]^T (neck-head angles: visual subsystem)
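Because the torso DOFs are shared, the two subsystem state vectors overlap. As an illustration (the index layout below is an assumption for the sketch; the paper only defines q_c, q_m, and q_v), the combined 14-DOF vector can be sliced into the 9-DOF manual and 8-DOF visual states:

```python
import numpy as np

# Combined joint vector: 3 torso + 6 manual + 5 visual = 14 DOF.
# The index layout is hypothetical, chosen only to illustrate the overlap.
q = np.arange(14, dtype=float)    # placeholder joint angles
MANUAL = np.r_[0:3, 3:9]          # torso (shared) + clavicle/shoulder/elbow
VISUAL = np.r_[0:3, 9:14]         # torso (shared) + neck/head

q_m = q[MANUAL]                   # 9-DOF manual subsystem state
q_v = q[VISUAL]                   # 8-DOF visual subsystem state
```

Any change to the first three entries of q is seen by both subsystems, which is exactly the shared-resource negotiation the model must coordinate.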
Figure 1. Multi-link composition of the 9-DOF manual subsystem (A) and
the 8-DOF visual subsystem (B). The arrow extending from each joint
indicates the positive direction of joint rotation in a right-hand sense.
The position of the manual subsystem end-effector is
represented by Cartesian coordinates as follows:
p = [f1(q_m), f2(q_m), f3(q_m)]^T (Eq. 1)
where f denotes a function of direct kinematics of the
movement. In contrast, the end-effector of the visual
subsystem is defined using a two-dimensional image
coordinate system [11].
p_v = [f4(q), f5(q)]^T = (k / p_z) [p_x, p_y]^T (Eq. 2)

where k is a spatial scaling factor, and p_x, p_y, and p_z
represent the x-, y-, and z-coordinates of the target in a
head-centered reference frame.
In general, the velocity of the end-effector p can be
obtained by:
ṗ = (∂f(q)/∂q) q̇ = J(q) q̇ (Eq. 3)
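Eq. 3 can be checked numerically: for any forward-kinematics function f, the Jacobian may be approximated by finite differences. A minimal sketch, using a hypothetical planar two-link arm as a stand-in for f (link lengths and angles are illustrative only, not values from the paper):

```python
import numpy as np

def fk_planar_2link(q, l1=0.3, l2=0.25):
    """Forward kinematics f of a hypothetical planar 2-link arm."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def numerical_jacobian(f, q, eps=1e-6):
    """Approximate J(q) = df/dq by central differences (cf. Eq. 3)."""
    q = np.asarray(q, dtype=float)
    m = f(q).shape[0]
    J = np.zeros((m, q.size))
    for i in range(q.size):
        dq = np.zeros_like(q)
        dq[i] = eps
        J[:, i] = (f(q + dq) - f(q - dq)) / (2 * eps)
    return J

q = np.array([0.4, 0.8])
J = numerical_jacobian(fk_planar_2link, q)
qdot = np.array([0.1, -0.2])    # joint velocities
pdot = J @ qdot                 # end-effector velocity, as in Eq. 3
```

The same finite-difference check applies to the 9- and 8-DOF subsystem Jacobians J_m and J_v, whatever form their direct kinematics take.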
where J is a Jacobian matrix. Eq. 3 can be applied to the
manual and visual subsystems separately:

ṗ_m = J_m q̇,  ṗ_v = J_v q̇ (Eq. 4)

where J_m and J_v represent the Jacobian matrices for the
manual and visual subsystems, respectively.
Generally, we wish to obtain q̇ as a function of ṗ, but
due to the redundant degrees of freedom of the multi-
link systems, the ordinary inverse of J cannot be
obtained. Alternatively, a weighted pseudo-inverse of J
(denoted J⁺) may be used:

q̇ = W⁻¹Jᵀ (J W⁻¹ Jᵀ)⁻¹ ṗ = J⁺ ṗ (Eq. 5)
where W is a weighting matrix that characterizes the
instantaneous contribution of each joint. The weighting
matrices for the manual subsystem were adapted from
an earlier study [2]. For the visual subsystem, regression
models of peak velocities of the joint angles, whose
measurements are described below, were used to
estimate the weighting matrices. Hence, Eq. 5 attempts
to satisfy the primary objectives of 1) obtaining joint
angles that place the end-effector at the desired
positions at a given time; 2) manipulating joint angles in
a way that the sum of squared joint velocities is
minimized; and 3) setting the relative contribution of
each joint as determined by the weighting matrix.
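Eq. 5 is straightforward to implement with NumPy. The sketch below uses an arbitrary redundant Jacobian and hypothetical weights; the paper's actual W matrices come from [2] for the manual subsystem and from the peak-velocity regression models for the visual subsystem:

```python
import numpy as np

def weighted_pinv(J, W):
    """Weighted pseudo-inverse J+ = W^-1 J' (J W^-1 J')^-1 (Eq. 5).
    W is a positive-definite joint weighting matrix."""
    Winv = np.linalg.inv(W)
    return Winv @ J.T @ np.linalg.inv(J @ Winv @ J.T)

# 2 task DOFs driven by 3 joints (redundant); all values are illustrative.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.3]])
W = np.diag([1.0, 2.0, 5.0])       # a heavier weight penalizes that joint's motion
pdot = np.array([0.10, -0.05])     # desired end-effector velocity

qdot = weighted_pinv(J, W) @ pdot  # minimizes qdot' W qdot subject to J qdot = pdot
```

Among all joint-velocity vectors that reproduce the desired end-effector velocity exactly, this solution is the one with the smallest weighted squared norm, so heavily weighted joints contribute less to the movement.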
Since we have redundant degrees of freedom, we can
introduce a secondary objective that reconfigures the
joint angles of the linkage system without changing the
end-effector position by using the matrix (I − J⁺J), which
projects an arbitrary vector q̇₀ into the null space of J
[12], where I is an identity matrix. Hence, for the manual
subsystem:

q̇ = J_m⁺ ṗ_m + (I − J_m⁺ J_m) q̇₀ (Eq. 6)

Multiplying both sides by J_v and noting that J_v q̇ = ṗ_v
(Eq. 4), we solve for q̇₀:

ṗ_v = J_v J_m⁺ ṗ_m + J_v (I − J_m⁺ J_m) q̇₀
q̇₀ = [J_v (I − J_m⁺ J_m)]⁺ (ṗ_v − J_v J_m⁺ ṗ_m) (Eq. 7)

Substituting Eq. 7 into Eq. 6:

q̇ = J_m⁺ ṗ_m + (I − J_m⁺ J_m) [J_v (I − J_m⁺ J_m)]⁺ (ṗ_v − J_v J_m⁺ ṗ_m) (Eq. 8)

which is simplified to Eq. 9 with a gain term (α) scaling
the secondary objective function [13]:

q̇ = J_m⁺ ṗ_m + α [J_v (I − J_m⁺ J_m)]⁺ (ṗ_v − J_v J_m⁺ ṗ_m) (Eq. 9)

In the present study, α was set to unity. Then q is
obtained from the numerical integration of q̇, and the
drift of the end-effector position was stabilized by a
feedback control algorithm [14].
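The coordination step of Eqs. 6–9 can be sketched as follows. For clarity this uses the unweighted Moore–Penrose pseudo-inverse (np.linalg.pinv) rather than the weighted form of Eq. 5, and arbitrary illustrative Jacobians; with α = 1 and enough redundancy, both subsystem velocity goals are met exactly:

```python
import numpy as np

def coordinated_qdot(Jm, Jv, pdot_m, pdot_v, alpha=1.0):
    """One velocity-domain step of the coordinated model (Eq. 9):
    the manual goal is the primary task; the visual goal is pursued
    through the redundant DOFs in the null space of Jm."""
    Jm_pinv = np.linalg.pinv(Jm)
    N = np.eye(Jm.shape[1]) - Jm_pinv @ Jm        # null-space projector of Jm
    qdot0 = np.linalg.pinv(Jv @ N) @ (pdot_v - Jv @ Jm_pinv @ pdot_m)   # Eq. 7
    return Jm_pinv @ pdot_m + alpha * qdot0       # Eq. 9

# Illustrative 5-DOF system: 2 manual task DOFs, 2 visual task DOFs.
Jm = np.array([[1.0, 0.0, 0.5, 0.0, 0.2],
               [0.0, 1.0, 0.0, 0.3, 0.0]])
Jv = np.array([[0.5, 0.0, 1.0, 0.0, 0.0],
               [0.0, 0.3, 0.0, 1.0, 0.5]])
pdot_m = np.array([0.10, -0.05])   # desired hand velocity
pdot_v = np.array([0.02,  0.04])   # desired gaze-image velocity

qdot = coordinated_qdot(Jm, Jv, pdot_m, pdot_v)
# A full simulation would Euler-integrate q over time and correct the
# residual end-effector drift with feedback, as in [14].
```

Here the secondary (visual) term lies entirely in the null space of Jm, so it reconfigures the shared torso DOFs for gaze without disturbing the hand trajectory.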
MOTION MEASUREMENTS
Subjects Ten healthy subjects, 23±3 (mean ± sd) years
old with normal vision participated in the experiments.
Equipment A circular array containing visual targets was
placed on the right frontal hemisphere of the subject.
The target area extended from 0° (mid-sagittal plane) to
90° (rightward) at distances of 60 and 95 cm from the
hip-point (Figure 2A). The height of the target array was
either at eye level or 50 cm below eye level (Figure 2B).
Visual targets were distributed over the array at 15°
intervals. Each target was a seven-segment digital
display (visual angle < 1°).
Procedure In a seated posture, the subjects were
asked initially to look at the home target located in the
mid-sagittal plane while both hands were resting on the
lap (Figure 2A). Then the subject reached with the right
hand for a target illuminated at a random location and
touched it with the index finger (Figure 2B). Movements
of the body links, identified by electromagnetic markers
placed on body landmarks, were recorded by a motion
capture system (Flock of Birds™).
RESULTS
Movements were simulated using a model of the manual
subsystem alone (Eq. 5), the visual subsystem alone
(Eq. 5), or the coordinated manual and visual
subsystems (Eq. 9). Examples of superimposed stick
figures generated by each model and the corresponding
motion measurements are presented in Figure 3. It was
observed that all models are capable of predicting
motion trajectories similar to the actual motion
recordings; however, noticeable differences between
predicted and actual motions were found for the
shoulder, upper arm, and forearm links.
For torso angles, the model of the manual subsystem
alone (Figure 4B) made an accurate prediction for qm2
(lateral bending), while the model of the visual
subsystem was better at predicting qm1 (extension) and
qm3 (axial rotation). This observation is in agreement with
a direct kinematics model (Figure 1) indicating that
lateral bending of the torso may not contribute
significantly to head rotation in the direction of a target
located in the right hemisphere while it may be of
primary importance for moving the torso and the hand
toward the target. Hence it is suggested that the
coordinated model, which benefits from both individual
models (manual and visual), provides a better
“combined” accuracy for all three torso angles.
In general, the accuracy for eight of the fourteen joint
angles of the combined visuo-manual linkage system
was significantly improved by the use of a coordinated
model (Table 2). Even though qm1 (torso
flexion/extension angle) did not show a significant
difference as a main effect when contrasted by the non-
coordinated or coordinated models, the coordination ×
target height interaction effect indicated that the
coordinated model made significantly more accurate
predictions for the low target positions, where downward
flexion movements are required for visual gaze and
manual reach. Coordination × target eccentricity
interaction effects indicated that prediction accuracy for
qm2 (torso lateral bending) increases with target
eccentricity in the coordinated model. These results
indicate that coordination improves model prediction
accuracy when the body segments are effectively
involved in a motion.
Figure 2. Configuration of targets in a top view (A) and rear view (B).
Figure 4. Model simulation result for torso angles; 1: flexion (up +), 2:
lateral bending (left+), 3: axial rotation (CCW+). Actual angles from
motion measurements (A), and predictions by a model of the manual
subsystem (B), the visual subsystem (C), and coordinated manual and
visual subsystems (D). The target was located at 90° from the mid-
sagittal plane, 50 cm below eye level (distance = 60 cm).
Table 2. RMS error of the joint angles predicted by models for all target
locations. Shaded rows correspond to significant improvements in
prediction accuracy by the coordinated model.
Joint   Non-Coordinated (mean ± se°)   Coordinated (mean ± se°)   Significance
qm1     3.5 ± 0.3                      4.2 ± 0.4                  p < 0.05
qm2     2.3 ± 0.2                      3.0 ± 0.3                  Non-significant
qm3     7.7 ± 0.9                      5.1 ± 0.5                  p < 0.05
qm4     6.2 ± 0.7                      8.3 ± 1.0                  p < 0.05
qm5     8.1 ± 1.1                      7.1 ± 1.0                  p < 0.05
qm6     25.4 ± 2.8                     19.5 ± 2.3                 p < 0.05
qm7     14.2 ± 1.6                     11.8 ± 1.1                 p < 0.05
qm8     39.1 ± 4.2                     35.2 ± 3.7                 p < 0.05
qm9     14.7 ± 0.9                     11.8 ± 1.0                 p < 0.05
qv4     8.9 ± 1.0                      5.8 ± 1.0                  p < 0.05
qv5     4.0 ± 0.5                      5.1 ± 0.6                  p < 0.05
qv6     6.5 ± 0.6                      5.3 ± 0.5                  p < 0.05
qv7     7.9 ± 0.3                      7.6 ± 0.4                  Non-significant
qv8     4.1 ± 0.5                      4.0 ± 0.4                  Non-significant
Figure 3. Stick figures generated by actual motion measurements and superimposed model predictions (measured and predicted trajectories shown over time). The target was located in the mid-sagittal plane at eye level (distance = 60 cm). A) manual subsystem; B) visual subsystem; C) manual + visual coordinated. The dotted lines that extend from the cross hairs represent the head orientation vector [15].
Prediction of end-posture angles was also generally
improved by the coordinated model (Table 3). Most
notably, the prediction error for torso axial rotation (qm3)
was reduced by 85% when the coordinated model was
used. Also, the prediction accuracy for qm1 (torso
flexion/extension) and qm2 (torso lateral bending)
increased for targets at low height and far eccentricity,
respectively, which is consistent with the observations
about the RMS error statistics above.
Table 3. Error of the joint angles of the end posture predicted by
models for all target locations. Shaded rows correspond to significant
improvements in prediction accuracy by the coordinated model.
Joint   Non-Coordinated (mean ± se°)   Coordinated (mean ± se°)   Significance
qm1     0.8 ± 1.0                      2.4 ± 0.9                  p < 0.05
qm2     0.7 ± 0.9                      -2.4 ± 1.0                 p < 0.05
qm3     11.5 ± 1.5                     1.7 ± 1.2                  p < 0.05
qm4     7.9 ± 2.0                      13.0 ± 2.1                 p < 0.05
qm5     13.8 ± 1.8                     12.2 ± 1.8                 p < 0.05
qm6     -41.0 ± 4.4                    -31.1 ± 3.7                p < 0.05
qm7     24.7 ± 3.4                     14.3 ± 3.6                 p < 0.05
qm8     53.3 ± 10.7                    46.3 ± 10.7                p < 0.05
qm9     25.8 ± 2.0                     19.7 ± 2.4                 p < 0.05
qv4     -13.6 ± 1.9                    -7.9 ± 1.8                 p < 0.05
qv5     2.0 ± 1.2                      6.2 ± 1.1                  p < 0.05
qv6     -8.2 ± 1.4                     -3.6 ± 1.3                 p < 0.05
qv7     8.2 ± 1.2                      5.7 ± 1.4                  p < 0.05
qv8     3.4 ± 1.0                      -2.0 ± 0.9                 p < 0.05
CONCLUSIONS
In visually guided reach motions, the visual and manual
subsystems act to locate the target and move the hand
to the target, respectively [16, 17]. The present model
proposes a method of incorporating multiple subsystems
with individual end-effectors using an optimization-based
inverse kinematics algorithm. In this process of
incorporation, coordination is required to share
common resources among subsystems, each dedicated
to manipulating its respective end-effector. The common
links may be controlled by either one of the subsystems
exclusively, while the other subsystem's control is
restrained. Movement accuracy can be viewed as the
result of this coordination. Dominance of one subsystem
over another may be a function of task requirements in
a specific context.
It was found that, in general, the accuracy of the predicted
joint angle trajectories was better when coordination
was introduced as a secondary cost function in the
differential inverse kinematics. Accordingly, the results of
this model suggest that 1) the central controller takes
into account the constraints of each subsystem to find
an optimal set of joint angles, hence coordination can be
viewed as the compromise of shared control between
identified subsystems involved in a movement; and 2) the
advantage of the coordination model is more prominent
for reach movements to low and eccentric targets. This
latter effect shows that the accuracy of the model
increases with the effective contribution of a joint to a
visually guided reach movement. Hence, the accuracy of the
coordinated model may be better than it appears when
considering only the average RMS errors across all
target locations. The statistically significant interaction
effects mentioned above support this hypothesis.
ACKNOWLEDGMENTS
We would like to acknowledge and thank the partners of
the Human Motion Simulation project for their support of
the present study (DaimlerChrysler Company, EDS -
PLM Solutions, Ford Motor Company, General Motors
Corporation, International Truck and Engine
Corporation, Lockheed Martin Corporation / Sandia
National Laboratories, University of Michigan Automotive
Research Center, US Army – Tank-Automotive and
Armaments Command, and US Postal Service).
REFERENCES
1. Dysart, M. J., and Woldstad, J. C. 1996, Posture
prediction for static sagittal-plane lifting, Journal of
Biomechanics, 29(10), 1393-1397.
2. Zhang, X., Kuo, A. D., and Chaffin, D. B. 1999,
Optimization-based differential kinematic modeling
exhibits a velocity-control strategy for dynamic
posture determination in seated reaching
movements, Journal of Biomechanics, 31, 1035-
1042.
3. Wang, X., 1999, Behavior-based inverse kinematics
algorithm to predict arm prehension postures for
computer-aided ergonomic evaluation, Journal of
Biomechanics, 32(5), 453-460.
4. Abdel-Malek, K., Yu, W., Jaber, J., and Duncan, J.
2001, Realistic posture prediction for maximum
dexterity, SAE Technical Papers Series, 2001-01-
2110 (Warrendale, PA: Society of Automotive
Engineers).
5. Kim, K., Martin, B. J., and Park, W. 2000, Head
orientation in visually guided tasks, SAE Technical
Papers Series, 2000-01-2174 (Warrendale, PA:
Society of Automotive Engineers).
6. Delleman, N. J., Huysmans, M. A., and Kujit-Evers,
L. F. M. 2001, Postural behaviour in static gazing
sidewards, SAE Technical Papers Series, 2001-01-
2093 (Warrendale, PA: Society of Automotive
Engineers).
7. Tipper, S. P., Howard, L. A., and Paul, M. A. 2001,
Reaching affects saccade trajectories. Experimental
Brain Research, 136(2), 241-249.
8. Peterka, R. J., and Benolken, M. S. 1995, Role of
somatosensory and vestibular cues in attenuating
visually induced human postural sway, Experimental
Brain Research, 105(1), 101-110.
9. Cohn, J. V., DiZio, P., and Lackner J. R. 2000,
Reaching during virtual rotation: context specific
compensations for expected coriolis forces, Journal
of Neurophysiology. 83(6), 3230-3240.
10. van der Kooij, H., Jacobs, R., Koopman, B., van der
Helm, F. 2001, An adaptive model of sensory
integration in a dynamic environment applied to
human stance control, Biological Cybernetics. 84(2),
103-115.
11. Hashimoto, K. 1999, Observer-based Visual
Servoing, In Robotics and Automation. Eds: B.
Ghosh, N. Xi and T. J. Tarn, Academic Press,
London.
12. Greville, T. N. E. 1959, The pseudoinverse of a
rectangular or singular matrix and its application to
the solution of systems of linear equations. SIAM
Review, 1(1), 38-43.
13. Maciejewski, A. A., and Klein, C. A. 1985, Obstacle
avoidance for kinematically redundant manipulators
in dynamically varying environments, International
Journal of Robotics Research, 4(3), 109-117.
14. Chiacchio, P., Chiaverini, S., Sciavicco, L., and
Siciliano, B. 1991, Closed-loop inverse kinematics
schemes for constrained redundant manipulators
with task space augmentation and task priority
strategy. International Journal of Robotics Research,
10, 410–425.
15. Jampel, R. S., and Shi, D. X. 1992, The primary
position of the eyes, the resetting saccade, and the
transverse visual head plane. Investigative
Ophthalmology and Visual Science. 33(8), 2501-
2510.
16. Kim, K. H. and Martin, B. J. 2002, Coordinated
Movements of the Head in Hand Reaching Tasks.
Proceedings of the 46th Human Factors and
Ergonomics Society Conference, Baltimore,
Maryland.
17. Vercher, J. L., Magenes, G., Prablanc, C., and
Gauthier, G. M. 1994, Eye-head-hand coordination
in pointing at visual targets: Spatial and temporal
analysis. Experimental Brain Research, 99, 507-523.
CONTACT
K. Han Kim
Email: kyunghan@umich.edu
Phone: +1-734-647-3241
Address: 1205 Beal Avenue,
Ann Arbor, MI 48109-2117
USA
... The optimization-based approach, however, lends itself to easy and effective modeling of multiple end-effectors, and its advantages in this capacity are addressed in this paper. Kim and Martin (2004) present another approach for modeling multiple limbs with shared DOFs, which extends inverse kinematics solutions to solve the subsystem of each limb separately [11]. In this work, the subsystems consist of the manual subsystem, which includes the torso and right arm, and the visual subsystem, which includes the torso and neck. ...
... The optimization-based approach, however, lends itself to easy and effective modeling of multiple end-effectors, and its advantages in this capacity are addressed in this paper. Kim and Martin (2004) present another approach for modeling multiple limbs with shared DOFs, which extends inverse kinematics solutions to solve the subsystem of each limb separately [11]. In this work, the subsystems consist of the manual subsystem, which includes the torso and right arm, and the visual subsystem, which includes the torso and neck. ...
Article
Full-text available
In the field of human modeling, there is an increasing demand for predicting human postures in real time. However, there has been minimal progress with methods that can incorporate multiple limbs with shared degrees of freedom (DOFs). This paper presents an optimization-based approach for predicting postures that involve dual-arm coordination with shared DOFs, and applies this method to a 30-DOF human model. Comparisons to motion capture data provide experimental validation for these examples. We show that this optimization-based approach allows dual-arm coordination with minimal computational cost. This new approach also easily extends to models with a higher number of DOFs and additional end-effectors.
... If the posture prediction scenario changes, a new experiment has to be carried out. In the inverse kinematics approach, a set of equations is used to find a solution (Griffin, 2001;Kim, Gillespie, & Martin, 2004;Tang, Cavazza, Mountain, & Earnshaw, 1999;Tolani, Goswami, & Badler, 2000;Wang, 1999;Wang & Verriest, 1998). For the optimization-based method, the key point is to find the minimum value of a cost function by meeting all the constraint requirements. ...
Article
Full-text available
Human posture prediction can often be formulated as a nonlinear multiobjective optimization (MOO) problem. The joint displacement function is considered as a benchmark of human performance measures. When the joint displacement function is used as the objective function, posture prediction is a MOO problem. The weighted-sum method is commonly used to find a Pareto solution of this MOO problem. Within the joint displacement function, the relative value of the weights associated with each joint represents the relative importance of that joint. Usually, weights are determined by trial and error approaches. This paper presents a systematic approach via an inverse optimization approach to determine the weights for the joint displacement function in posture prediction. This inverse optimization problem can be formulated as a bi-level optimization problem. The design variables are joint angles and weights. The cost function is the summation of the differences between two set of joint angles (the design variables and the realistic posture). Constraints include (1) normalized weights within limits and (2) an inner optimization problem to solve for joint angles (predicted posture). Additional constraints such as weight limits and weight linear equality constraints, obtained through observations, are also implemented in the formulation to test the method. A 24 degree of freedom human upper body model is used to study the formulation and visualize the prediction. An in-house motion capture system is used to obtain the realistic posture. Four different percentiles of subjects are selected to run the experiment. The set of weights for the general seated posture prediction is obtained by averaging all weights for all subjects and all tasks. On the basis of obtained set of weights, the predicted postures match the experimental results well.
Article
Full-text available
The works presented in this memory concern the automatic generation of postures and human movements on industrial workstation. The aim is to animate in a realistic way a numerical mannequin in order to simulate an operator in the execution of his task. This animation must make it possible to contribute to the analysis of the biomechanical factors which can engender musculo-skeletal disorders. At rst, choices of modelling are proposed according to this speci c context. The model selected is based on a skeletal representation of the top of the body made up of rigid bodies and ideal connections, and integrates anthropometric data. Then, a general plan of laws of order based on kinematic model is developed. It is declined in several strategies which are tested on characteristic movements and workstation. These tests and the comparison with the measurements taken on human subject make it possible to evaluate the performances of the plan of order and the choices carried out. This work gave rise to a software tool, baptized OLARGE-TMS, for simulation of human movement on workstation.
Conference Paper
Although much work has been completed with modeling head-neck movements as well with studying the intricacies of vision and eye movements, relatively little research has been conducted involving how vision affects human upper-body posture. By leveraging direct human optimized posture prediction (D-HOPP), we are able to predict postures that incorporate one's tendency to actually look towards a workspace or see a target. D-HOPP is an optimization-based approach that functions in real time with Santos™, a new kind of virtual human with a high number of degrees-of-freedom and a highly realistic appearance. With this approach, human performance measures provide objective functions in an optimization problem that is solved just once for a given posture or task. We have developed two new performance measures: visual acuity and visual displacement. Although the visual-acuity performance measure is based on well-accepted published concepts, we find that it has little effect on the predicted posture when a target point is outside one's field of view. Consequently, we have developed visual displacement, which corrects this problem. In general, we find that vision alone does not govern posture. However, using multi-objective optimization, we combine visual acuity and visual displacement with other performance measures, to yield realistic and validated predicted human postures that incorporate vision.
Article
This paper presents a nonlinear inverse optimization approach to determine the weights for the joint displacement function in standing reach tasks. This inverse optimization problem can be formulated as a bi-level highly nonlinear optimization problem. The design variables are the weights of a cost function. The cost function is the weighted summation of the differences between two sets of joint angles (predicted posture and the actual standing reach posture). Constraints include the normalized weights within limits and an inner optimization problem to solve for joint angles (predicted standing reach posture). The weight linear equality constraints, obtained through observations, are also implemented in the formulation to test the method. A 52 degree-of-freedom (DOF) human whole body model is used to study the formulation and visualize the prediction. An in-house motion capture system is used to obtain the actual standing reach posture. A total of 12 subjects (three subjects for each percentile in stature of 5th percentile female, 50th percentile female, 50th percentile male and 95th percentile male) are selected to run the experiment for 30 tasks. Among these subjects one is Turkish, two are Chinese, and the rest subjects are Americans. Three sets of weights for the general standing reach tasks are obtained for the three zones by averaging all weights in each zone for all subjects and all tasks. Based on the obtained sets of weights, the predicted standing reach postures found using the direct optimization-based approach have good correlation with the experimental results. Sensitivity of the formulation has also been investigated in this study. The presented formulation can be used to determine the weights of cost function within any multi-objective optimization (MOO) problems such as any types of posture prediction and motion prediction.
Article
The work presented in this thesis concerns the automatic generation of human postures and movements at industrial workstations. The objective is to animate a digital mannequin realistically in order to simulate an operator performing a task. This animation is intended to support the analysis of the biomechanical factors that can lead to musculoskeletal disorders. First, modeling choices are proposed for this specific context. The selected model is based on a skeletal representation of the upper body composed of rigid bodies and ideal joints, and integrates anthropometric data. Next, a general scheme of control laws based on a kinematic model is developed. It is declined into several strategies that are tested on characteristic movements and at a workstation. These tests, together with comparisons against measurements taken on human subjects, make it possible to evaluate the performance of the control scheme and the modeling choices. This work gave rise to a software tool, named OLARGE-TMS, for simulating human movement at a workstation.
Article
Printout. Thesis (M.S.)--University of Iowa, 2005. Includes bibliographical references (leaves 91-93).
Article
This article presents new closed-loop schemes for solving the inverse kinematics of constrained redundant manipulators. In order to exploit the space of redundancy, the end-effector task is suitably augmented by adding a constraint task. The success of the technique is guaranteed either by specifying the constraint task ad hoc or by resorting to a task priority strategy. Instead of previous inverse kinematics schemes that use the Jacobian pseudoinverse, the schemes in this work are shown to converge using the Jacobian transpose. A number of case studies illustrate different ways of solving redundancy in the context of the proposed schemes.
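The closed-loop Jacobian-transpose update described in this abstract can be sketched for a planar two-link arm. The link lengths, gain, starting posture, and iteration count below are illustrative assumptions, not values from the article:

```python
import numpy as np

# Illustrative planar two-link arm (link lengths are assumptions).
L1, L2 = 1.0, 1.0

def fk(q):
    """Forward kinematics: end-effector position for joint angles q."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    """Geometric Jacobian of the end-effector position."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik_transpose(q, x_goal, alpha=0.1, iters=500):
    """Closed-loop IK: drive the task-space error to zero using J^T,
    avoiding the matrix inversion a pseudoinverse scheme would need."""
    for _ in range(iters):
        e = x_goal - fk(q)                  # task-space error
        q = q + alpha * jacobian(q).T @ e   # transpose update
    return q

q = ik_transpose(np.array([0.3, 0.5]), np.array([1.2, 0.8]))
```

Convergence follows because the update is gradient descent on the squared task-space error; the constraint-task augmentation discussed in the abstract would simply stack extra rows onto the error vector and the Jacobian.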
Article
The vast majority of work to date concerned with obstacle avoidance for manipulators has dealt with task descriptions in the form of pick-and-place movements. The added flexibility in motion control for manipulators possessing redundant degrees of freedom permits the consideration of obstacle avoidance in the context of a specified end-effector trajectory as the task description. Such a task definition is a more accurate model for tasks such as spray painting or arc welding. The approach presented here is to determine the required joint angle rates for the manipulator under the constraints of multiple goals: the primary goal described by the specified end-effector trajectory and secondary goals describing the obstacle avoidance criteria. The decomposition of the solution into a particular and a homogeneous component effectively illustrates the priority of the multiple goals, that is, exact end-effector control with the redundant degrees of freedom maximizing the distance to obstacles. An efficient numerical implementation of the technique permits sufficiently fast cycle times to deal with dynamic environments.
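The particular-plus-homogeneous decomposition referred to here is commonly written as qdot = pinv(J) @ xdot + (I - pinv(J) @ J) @ z. A minimal sketch for a redundant planar three-link arm follows, where the link lengths and the secondary-goal vector z are assumptions for illustration (in the paper's setting z would be a gradient that increases distance to obstacles):

```python
import numpy as np

# Illustrative redundant planar three-link arm (lengths are assumptions).
L = np.array([1.0, 1.0, 1.0])

def jacobian(q):
    """Position Jacobian for a planar serial chain with relative joint angles."""
    a = np.cumsum(q)                 # absolute link angles
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(a[i:]))
        J[1, i] =  np.sum(L[i:] * np.cos(a[i:]))
    return J

def redundant_rates(q, xdot, z):
    """Joint rates = particular solution (tracks the end-effector trajectory
    exactly) + homogeneous solution (pursues a secondary goal z in the
    null space, without disturbing the end-effector)."""
    J = jacobian(q)
    Jp = np.linalg.pinv(J)
    return Jp @ xdot + (np.eye(3) - Jp @ J) @ z
```

Because J @ (I - pinv(J) @ J) = 0 away from singularities, any secondary-goal vector z, such as a gradient of the distance to the nearest obstacle, is filtered so that it cannot perturb the commanded end-effector velocity.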
Article
An adaptive estimator model of human spatial orientation is presented. The adaptive model dynamically weights sensory error signals. More specifically, the model weights the difference between expected and actual sensory signals as a function of environmental conditions. The model does not require any changes in model parameters. Differences with existing models of spatial orientation are that: (1) environmental conditions are not specified but estimated, (2) the sensor noise characteristics are the only parameters supplied by the model designer, (3) history-dependent effects and mental resources can be modelled, and (4) vestibular thresholds are not included in the model; instead, vestibular-related threshold effects are predicted by the model. The model was applied to human stance control and evaluated with results of a visually induced sway experiment. From these experiments it is known that the amplitude of visually induced sway reaches a saturation level as the stimulus level increases. This saturation level is higher when the support base is sway referenced. For subjects experiencing vestibular loss, these saturation effects do not occur. Unknown sensory noise characteristics were found by matching model predictions with these experimental results. Using only five model parameters, far more than five data points were successfully predicted. Model predictions showed that both saturation levels are vestibular related, since removal of the vestibular organs in the model removed the saturation effects, as was also shown in the experiments. It seems that the nature of these vestibular-related threshold effects is not physical, since no threshold is included in the model. The model results suggest that vestibular-related thresholds are the result of the processing of noisy sensory and motor output signals. Model analysis suggests that, especially for slow and small movements, postural orientation in the environment cannot be estimated optimally, which causes sensory illusions. The model also confirms the experimental finding that postural orientation is history dependent and can be shaped by instruction or mental knowledge. In addition the model predicts that: (1) vestibular-loss patients cannot handle sensory-conflict situations and will fall down, (2) during sinusoidal support-base translations vestibular function is needed to prevent falling, (3) loss of somatosensory information from the feet results in larger postural sway for sinusoidal support-base translations, and (4) loss of vestibular function results in falling for large support-base rotations with the eyes closed. These predictions are in agreement with experimental results.
Article
A visual servo system is a robot control system that incorporates a vision sensor in the feedback loop. Since visual information includes considerable delay and the sampling rate of the vision sensor is much slower than that of the joint servo, estimation of the object motion at the joint-servo rate is desirable for high-performance real-time tracking. This paper proposes a linear time-invariant observer-based controller for visual servoing with redundant features. How the performance is improved by using redundant features is also described. Moreover, the effectiveness of the observer-based controller is verified by experiments on a PUMA 560 robot.
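A common way to realize such an observer is to predict the target state at every joint-servo tick and correct only when a new, slower vision sample arrives. The sketch below uses a constant-velocity target model and a hand-tuned gain, both of which are assumptions for illustration rather than the paper's actual design:

```python
import numpy as np

dt = 0.001                            # joint-servo period (assumed 1 kHz)
A = np.array([[1.0, dt],
              [0.0, 1.0]])            # constant-velocity target model
Lgain = np.array([0.4, 8.0])          # observer gain (hand-tuned assumption)

def step(xhat, y=None):
    """Advance the estimate one servo tick; fuse a vision sample if present."""
    xhat = A @ xhat                   # predict at the fast servo rate
    if y is not None:                 # vision arrives at a much slower rate
        innovation = y - xhat[0]      # vision measures position only
        xhat = xhat + Lgain * innovation
    return xhat
```

Between vision samples the estimate evolves open loop, so the joint servo always has a position and velocity estimate available at its own rate; compensating the image-processing delay mentioned in the abstract would additionally require shifting each measurement update back by the known latency.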
Article
Where is my head? Knowing head orientation in space is necessary to estimate the extent of the visual field in tasks requiring visual feedback such as driving or manual materials handling. Visually guided tasks are generally dependent on head and eye movements for visual acquisition of the target, and head movements are of significant importance when target eccentricity from the neutral reference point is large. The aim of the present work was to investigate head orientation in space in hand pointing tasks and to model the head response. Standing subjects were required to direct their gaze at one of three targets, equally distributed vertically in the sagittal plane. The task was performed while standing (a) with the arms next to the body, (b) holding a load in a static condition, and (c) aiming at targets with a heavy or light load held in the hands. Movements of the head and body segments were recorded by a motion capture system. Spatial head orientation was determined as a function of target location. When the subjects looked at the upper and lower targets, head orientation was shifted up by 2.16° and 1.84°, respectively, in the loaded condition when compared to the reference condition. Similarly, when the subjects aimed at the targets with the hand loaded, head orientation was shifted up by 2.32° and 1.22° for the upper and lower targets, respectively, in the heavy-load condition when compared to the light-load condition. Three major indications can be derived from these results: 1) head movements contribute significantly to target acquisition; 2) head orientation away from the straight-ahead position is accompanied by a shift of the visual field that must be taken into account for workplace design; and 3) when a load is carried in the hand, head movements may be used to compensate for shifts in the center of mass and are part of the associated reorganization of posture; hence systematic head orientation changes can be predicted.
Article
This paper describes postural behaviour in static gazing sidewards. The results show that the head (supported by the underlying segments) contributes at a particular rate to bringing the gaze onto the target. This rate is reduced when postural constraints are present, i.e., restricted ranges of motion of the pelvis (in sitting) and the chest (due to fixed hand positions), suggesting that postural behaviour is guided by some sort of musculoskeletal load sharing.
Article
This paper presents an efficient numerical formulation for the prediction of realistic postures. This problem is defined by the method (or procedure) used to predict the posture of a human, given a point in the reachable space. The exposition addresses (1) the determination of whether a point is reachable (i.e., does it exist within the reach envelope) and (2) the calculation of a posture for a given point. While many researchers have used either statistical models of empirical data or the traditional geometric inverse kinematics method for posture prediction, we present a method based on kinematics for modeling, but one that uses optimization of a cost function to predict a realistic posture. It is shown that this method replicates the human mind in selecting a posture from an infinite number of possible postures. We illustrate the methodology and an accompanying experimental code through a planar and a spatial example, and validation using commercial human modeling and simulation code.
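Cost-function-based posture prediction of the kind the abstract describes can be sketched for a planar two-joint limb. The segment lengths, joint weights, neutral posture, penalty weight, and optimizer below are all illustrative assumptions, not the paper's formulation:

```python
import numpy as np

# Illustrative planar two-joint limb (all numbers below are assumptions).
L1, L2 = 0.35, 0.30                    # segment lengths
q_neutral = np.array([0.4, 0.8])       # "comfortable" joint angles
w = np.array([1.0, 2.0])               # joint-wise discomfort weights

def fk(q):
    """End-effector (hand) position."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def cost(q, target, penalty=50.0):
    """Discomfort (weighted deviation from neutral) plus a reach penalty."""
    return np.sum(w * (q - q_neutral) ** 2) + penalty * np.sum((fk(q) - target) ** 2)

def predict_posture(target, iters=2000, step=1e-2, h=1e-6):
    """Pick, from the infinitely many candidate postures, the one minimizing
    the cost (plain gradient descent with central-difference gradients)."""
    q = q_neutral.copy()
    for _ in range(iters):
        g = np.zeros(2)
        for i in range(2):
            dq = np.zeros(2)
            dq[i] = h
            g[i] = (cost(q + dq, target) - cost(q - dq, target)) / (2 * h)
        q = q - step * g
    return q
```

Treating the reach requirement as a soft penalty keeps the sketch short; a full implementation would typically impose reachability as a hard constraint in a nonlinear program, as the paper's optimization framing suggests.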
Article
The purpose of the present study is to investigate movements of the head spatially and temporally coordinated with hand reach movements simulating industrial assembly tasks. The motions recorded from thirty subjects performing reach movements with the right hand toward eccentric targets indicate that 1) hand movement onset lags head movement onset with a duration proportional to target eccentricity; 2) the head does not aim directly at a target, but travels only a fraction of target eccentricity and often deviates away from the target substantially; and 3) head movements are constrained by the strategy of either controlling the head position in space or controlling head rotation about the torso. These results indicate that head movements are constrained by both visual and non-visual factors. While the major function of the head is to displace the visual gaze toward the target, non-visual constraints, which include postural coordination with whole body movements, also significantly affect head movements.
Article
Photographic and video analyses show that the primary position of the eyes is a natural constant position in alert normal humans, and the eyes are automatically saccadically reset to this position from any displacement of the visual line. The primary position is not dependent on fixation, the fusion reflex, gravity, or the head position. The primary position is defined anatomically by head and eye planes and lines that are localized by photography, magnetic resonance imaging, and x-rays of the head and neck. The eyes are in the primary position when the principal (horizontal) retinal plane is coplanar with the transverse visual head (brain) plane (TVHP), and the equatorial plane of the eye is coplanar with a fixed orbital plane (Listing's plane). Evidence is presented to indicate an active neurologic basis for the primary position instead of passive mechanical forces. A different understanding of the primary position and the conception of the TVHP may be valuable in analyzing oculomotor defects.