Definition and Evaluation of an Interaction Model for a Three-dimensional
Interface
Cédric Dumas, Patricia Plénacoste, Catherine Demarey
LIFL
Batiment M3
Cité Scientifique
F-59 650 Villeneuve d’Ascq
France
Email: patricia.plenacoste@univ-lille1.fr ; dumas@lifl.fr
Abstract
We have developed a new 3D virtual workspace for distant meetings in which 2D and 3D documents can be integrated. Interaction must be generic so that users can manipulate all documents in the same way. Starting from real-time computer graphics techniques and from users' abilities in a 3D virtual environment, we have defined a model of interaction together with the spatial organisation of the workspace. We emphasize interaction based on two-handed direct manipulation, the use of two 3D input devices, simple metaphors, and suggestive visual cues. Three experiments tested our model. The first concerned the structure of the visual field and motor performance; the results show that the latter is influenced by the visual context. The second studied which perceptive cues are relevant for enhancing pointing performance depending on the kind of input device used; the results show that isotonic devices are superior and that shadows help guide the action. The last investigated the effect of the shape of the shadow; the results suggest that the shadow's shape was processed as an affordance for action.
A 3D INTERACTION MODEL
We define navigation as changes in the user's point of view. Interaction refers to how the user acts in the scene:
the user manipulates objects without changing his overall point of view of the scene. Navigation and interaction
are intrinsically linked; in order to interact with the interface the user has to be able to move within the interface.
Unfortunately, the existence of a third dimension creates new problems with positioning and with user
orientation (Hinckley, 1994); these need to be dealt with in order to avoid disorienting the user. This is especially
true for our interface, where the main objective is not to navigate within the interface, but rather to act on the
interface. This entails designing a coordinate frame where navigation within a restricted space is adequate and
easy. With a 3D isotonic input device (Zhai, 1998) such as an Ascension™ tracker, a translation of the dominant hand is immediately reflected in the interface by the pointer (figure 1). In a real-life situation, users cannot go and fetch documents or tools without getting up from the table. With our room metaphor, the user does not have to navigate to find objects: he can select them directly with the pointer, which can be moved throughout the entire meeting room. Even though an appropriate input device is available, the user may still lose his pointer
when moving around in the interface. There are several ways of dealing with this problem. First, pointer
orientation is used to indicate any change in direction and to enhance the impression of movement. Secondly, we
use shading effects and the pointer's shadow is projected onto the floor. This helps the user to perceive meeting
room depth accurately and to get his bearings quickly and easily (Kersten, 1997).
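As an illustration of this cue, the short sketch below (ours, not code from the paper; the function name, the horizontal floor at y = 0 and the directional-light convention are assumptions) shows how the pointer's cast shadow can be obtained by projecting its position onto the floor along the light direction.

    # Minimal sketch: project the 3D pointer position onto the floor plane
    # to obtain its cast shadow. Assumes a horizontal floor at y = 0 and a
    # directional light whose y component points downwards (illustrative only).
    def pointer_shadow(pointer_pos, light_dir):
        px, py, pz = pointer_pos
        lx, ly, lz = light_dir
        if ly >= 0:
            raise ValueError("light must point towards the floor")
        t = -py / ly                       # ray pointer + t * light hits y = 0
        return (px + t * lx, 0.0, pz + t * lz)

    # Example: a pointer 1.5 units above the floor, light falling straight down
    print(pointer_shadow((2.0, 1.5, -3.0), (0.0, -1.0, 0.0)))  # -> (2.0, 0.0, -3.0)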
Figure 1: Pointer on an H2O molecule, with progressive bounding boxes surrounding objects
Our model uses visual cues to show that an object has been selected or that it can be selected (figure 1): a
graphical representation of a box appears progressively around the object. The closer the pointer is to the object,
the more visible the surrounding box becomes and the more closely it fits the object. This progressive bounding box system greatly
simplifies the manipulation of the pointer.
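A minimal sketch of such a progressive bounding box follows (our illustration; the parameter names, distance thresholds and linear interpolation are assumptions, not taken from the paper): the box's opacity and its margin around the object are interpolated from the pointer-to-object distance, so the box fades in and tightens as the pointer approaches.

    # Sketch: drive the visibility and size of the surrounding box from the
    # pointer-object distance (illustrative names and ranges).
    def progressive_bounding_box(distance, near=0.2, far=2.0, max_margin=0.5):
        # Normalise the distance into [0, 1]: 0 = pointer at the object, 1 = far away.
        t = min(max((distance - near) / (far - near), 0.0), 1.0)
        opacity = 1.0 - t        # box becomes more visible as the pointer approaches
        margin = max_margin * t  # box closes in on the object as the pointer approaches
        return opacity, margin

    print(progressive_bounding_box(1.8))  # far away: faint, loose box
    print(progressive_bounding_box(0.3))  # close: nearly opaque, tight box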
Once an object is selected, the user may want to manipulate it. In order to maintain direct manipulation and to
avoid widgets, we use an isometric device, a 3D trackball, in the non-dominant hand to apply actions to the
object (rotation, for example). Our model therefore uses bimanual interaction (Kabbash, 1994): it gives the user more interaction possibilities and better matches real-world behaviour. It should also reduce dominant-hand movement and thus increase precision in object manipulation (with the isometric device).
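Because an isometric device senses force or deflection rather than position, it lends itself to rate control. The sketch below (ours; the device interface, gain and Euler-angle representation are assumptions) shows one way the non-dominant hand's input could be turned into an incremental rotation of the selected object.

    # Sketch: rate-control mapping for an isometric device reporting a
    # per-axis deflection in [-1, 1]; the deflection is treated as an angular
    # velocity and integrated each frame into the object's orientation.
    GAIN = 1.5  # radians per second at full deflection (illustrative value)

    def rotate_object(orientation, deflection, dt):
        # orientation: (rx, ry, rz) Euler angles in radians; dt: frame time in seconds
        return tuple(angle + GAIN * d * dt for angle, d in zip(orientation, deflection))

    # Example: half deflection around the vertical axis for one 60 Hz frame
    orientation = rotate_object((0.0, 0.0, 0.0), (0.0, 0.5, 0.0), 1 / 60)
    print(orientation)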
EVALUATION
We present here the first experiment; the poster will present all three experiments that tested our model at different stages of its design. Recent work on 3D interaction has shown that people have trouble identifying the depth of a visual scene (Wanger et al., 1992; Carr & England, 1995) when using 3D input devices. A shadow enables users to infer both the elevation and the location of an object: the gap between an object and its shadow indicates the object's height above the ground plane, and the location of the shadow on the ground plane indicates the object's distance. We hypothesize that the static shadow cues provided by objects and the dynamic shadow cue provided by the pointer influence depth and distance perception. They improve guidance when pointing at an object in a 3D scene and therefore pointing performance. Users can exploit static and dynamic shadow information to guide their actions, achieve a particular goal, and make decisions in a three-dimensional world.
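As a worked example of this relation (ours, under the assumption of a known directional light and a floor at y = 0), the height of an object can be recovered from the horizontal gap between the object's ground position and its cast shadow:

    import math

    # Sketch: under a directional light with a downward y component, the
    # horizontal gap between an object's footprint and its shadow is
    # proportional to the object's height above the floor.
    def height_from_shadow_gap(gap, light_dir):
        lx, ly, lz = light_dir
        horizontal = math.hypot(lx, lz)
        if horizontal == 0:
            raise ValueError("a vertical light produces no gap")
        return gap * (-ly) / horizontal

    # Example: light tilted 45 degrees, so the gap equals the height
    print(height_from_shadow_gap(0.8, (1.0, -1.0, 0.0)))  # -> 0.8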
The aim of this first experiment was to analyse the effects of manipulating contextual depth cues on the accuracy of an aiming task. Expert and novice subjects performed a pointing task in a three-dimensional environment. For this task, we used two kinds of input devices: a mouse-and-keyboard combination and an isotonic device. The target was a cube presented in a 3D room. The 3D context was configured by three factors: texture, a dynamic shadow (or no shadow) cast by the pointer, and a static shadow (or no shadow) cast by the target. The pointer's shadow is considered dynamic because it follows the pointer's movements. Crossing these three variables produced eight experimental conditions. The results showed the superiority of the isotonic input device and the usefulness of shadows for guidance in a 3D computer environment.
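For clarity, the eight conditions are simply the full crossing of the three binary factors; the small enumeration below (illustrative labels, assuming each factor has two levels) makes this explicit.

    from itertools import product

    # Sketch: the 2 x 2 x 2 factorial design behind the eight conditions
    # (factor and level labels are ours, for illustration).
    factors = {
        "texture":        ["textured", "untextured"],
        "pointer_shadow": ["dynamic shadow", "no shadow"],
        "target_shadow":  ["static shadow", "no shadow"],
    }

    conditions = list(product(*factors.values()))
    for i, levels in enumerate(conditions, 1):
        print(i, dict(zip(factors, levels)))
    print(len(conditions), "conditions")  # -> 8 conditions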
CONCLUSION
Our interaction model exploits the advantages of two sorts of devices (isotonic and isometric), and 3D documents can be manipulated easily. We plan to evaluate our model further, especially with respect to two-handed manipulation.
REFERENCES
Carr, K., & England, R. (1995). Simulated and Virtual Realities: Elements of Perception. Taylor & Francis.
Hinckley, K., Pausch, R., Goble, J.C., & Kassell, N.F. (1994). A survey of design issues in spatial input. Proceedings of UIST'94, 213-222.
Kabbash, P., Buxton, W., & Sellen, A. (1994). Two-handed input in a compound task. Proceedings of CHI'94, 417-423.
Kersten, D., Mamassian, P., & Knill, D.C. (1997). Moving cast shadows induce apparent motion in depth. Perception, 26, 171-192.
Wanger, L.R., Ferwerda, J.A., & Greenberg, D.P. (1992). Perceiving spatial relationships in computer-generated images. IEEE Computer Graphics & Applications, 44-57.
Zhai, S. (1998). User performance in relation to 3D input device design. ACM Computer Graphics, 32(4), 50-54.