May 2024 · 18 Reads
July 2023 · 18 Reads · 3 Citations
April 2023 · 14 Reads
April 2023 · 8 Reads · 2 Citations
December 2022 · 74 Reads · 7 Citations
Reaching for objects in a dynamic environment requires fast online corrections that compensate for sudden object shifts or postural changes. Previous studies revealed the key role of visually monitoring the hand-to-target distance throughout action execution. In the current study, we investigate how sensorimotor asymmetries associated with space perception, brain lateralization and biomechanical constraints affect the efficiency of online corrections. Participants performed reaching actions in virtual reality, where the virtual hand was progressively displaced from the real hand to trigger online corrections, allowing control over the total amount of redirection and the region of space in which the action unfolded. The efficiency of online corrections and the degree of awareness of the ensuing motor corrections served as assessment variables. Results revealed more efficient visuo-motor corrections for actions redirected towards, rather than away from, the body midline. The effect is independent of the reaching hand and the hemispace of action, making explanations based on laterality effects or biomechanical constraints improbable. Nor can the result be accounted for by the visual-processing advantage in the straight-ahead region. An explanation may be found in the finer sensorimotor representations characterizing the frontal space proximal to the body, where a preference for visual processing has been documented and where high-value functional actions, like fine manipulative skills, typically take place. This article is part of a discussion meeting issue ‘New approaches to 3D vision’.
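As a minimal illustrative sketch (not the authors' implementation), the progressive displacement of the virtual hand can be modelled as an offset that ramps up with reach progress, so the redirection stays small early in the movement and reaches its full amount at the target; the function name, the linear ramp, and the 2 cm offset below are assumptions for illustration only.

```python
import numpy as np

def redirected_hand_position(real_hand, start, target, full_offset, ease=1.0):
    """Illustrative hand-redirection ramp: render the virtual hand displaced
    from the real hand by an offset that grows with progress toward the target.

    real_hand, start, target: 3-D positions as np.ndarray of shape (3,)
    full_offset: displacement vector applied at 100% progress, e.g. a shift
                 toward (or away from) the body midline.
    ease:        exponent shaping the ramp (1.0 = linear).
    """
    total = np.linalg.norm(target - start)
    covered = np.linalg.norm(real_hand - start)
    progress = np.clip(covered / total, 0.0, 1.0) if total > 0 else 0.0
    return real_hand + (progress ** ease) * full_offset

# Hypothetical example: a 2 cm redirection toward the body midline (negative x),
# recomputed every frame from the tracked real-hand position.
virtual_hand = redirected_hand_position(
    real_hand=np.array([0.10, 0.00, 0.25]),
    start=np.array([0.10, 0.00, 0.00]),
    target=np.array([0.10, 0.00, 0.50]),
    full_offset=np.array([-0.02, 0.00, 0.00]),
)
print(virtual_hand)  # displaced position the renderer would show
```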
July 2022 · 152 Reads · 2 Citations
Manipulation of hand-held objects in Virtual Reality (VR) requires input tracking with high freedom of movement, as well as haptic feedback of hand-object interactions. Through our prototypes we demonstrate a pragmatic approach to haptic feedback on controllers that render human-scale forces. Our devices manifest haptic simulation of compliance, texture, surface normals, sizes, weights, and kinematic forces. These are brought to bear on hand-object interaction primitives such as palpation, manipulation, grasping, squeezing, cutaneous touch, stable grip, dexterity, and precision manipulation, which are collected as a taxonomy and represent a layer between the inherent haptic properties of the objects and the hand interaction of the operator. We implement prototypes that simulate the functional affordances of each of these aspects, and characterize their performance in human perception studies. Our work offers a model of hand-object interactions that goes beyond force rendering on a finger-by-finger basis (as typical of hand exoskeletons and gloves).
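To illustrate how such a taxonomy can act as a layer between object haptic properties and hand-interaction primitives, here is a minimal sketch; the property-to-primitive mapping and all names are illustrative assumptions, not data or an API from the paper.

```python
from dataclasses import dataclass, field

# Haptic properties the abstract lists as renderable by the controllers.
HAPTIC_PROPERTIES = {"compliance", "texture", "surface_normals",
                     "size", "weight", "kinematic_forces"}

@dataclass
class InteractionPrimitive:
    """One taxonomy entry: a hand-object interaction primitive and the
    haptic properties it draws on (this mapping is a guess, for illustration)."""
    name: str
    properties: frozenset = field(default_factory=frozenset)

TAXONOMY = [
    InteractionPrimitive("palpation", frozenset({"compliance", "surface_normals"})),
    InteractionPrimitive("grasping", frozenset({"size", "compliance", "weight"})),
    InteractionPrimitive("squeezing", frozenset({"compliance"})),
    InteractionPrimitive("cutaneous_touch", frozenset({"texture", "surface_normals"})),
    InteractionPrimitive("stable_grip", frozenset({"weight", "kinematic_forces"})),
    InteractionPrimitive("precision_manipulation",
                         frozenset({"size", "texture", "kinematic_forces"})),
]

def properties_required(primitives):
    """Union of haptic properties a controller must render to cover the primitives."""
    return set().union(*(p.properties for p in primitives))

# Example: what a controller would need to support palpation and grasping.
print(properties_required(TAXONOMY[:2]))
```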
July 2022 · 59 Reads · 1 Citation
Manipulation of hand-held objects in Virtual Reality (VR) requires input tracking with high freedom of movement, as well as haptic feedback of hand-object interactions. Through our prototypes we demonstrate a pragmatic approach to haptic feedback on controllers that render human-scale forces. Our devices manifest haptic simulation of compliance, texture, surface normals, sizes, weights, and kinematic forces. These are brought to bear on hand-object interaction primitives such as palpation, manipulation, grasping, squeezing, cutaneous touch, stable grip, dexterity, and precision manipulation, which are collected as a taxonomy and represent a layer between the inherent haptic properties of the objects and the hand interaction of the operator. We implement prototypes that simulate the functional affordances of each of these aspects, and characterize their performance in human perception studies. Our work offers a model of hand-object interactions that goes beyond force rendering on a finger-by-finger basis (as typical of hand exoskeletons and gloves).
June 2022 · 17 Reads · 25 Citations
April 2022 · 38 Reads · 4 Citations
Structured note-taking forms such as sketchnoting, self-tracking journals, and bullet journaling go beyond immediate capture of information scraps. Instead, the pride taken in hand-drawn craftsmanship increases the perceived value of the result for sharing and display. But hand-crafting lists, tables, and calendars is tedious and repetitive. To support these practices digitally, Style Blink ("Style-Blocks+Ink") explores handcrafted styling as a first-class object. Style-blocks encapsulate digital ink, enabling people to craft, modify, and reuse embellishments and decorations for larger structures, and apply custom layouts. For example, we provide interaction instruments that style ink for personal expression, inking palettes that afford creative experimentation, fillable pens that can be "loaded" with commands and actions to replace menu selections, and techniques to customize inked structures post-creation by modifying the underlying handcrafted style-blocks and to re-layout the overall structure to match users' preferred template. In effect, any ink stroke, notation, or sketch can be encapsulated as a style-object and re-purposed as a tool. Feedback from 13 users shows the potential of style adaptation and re-use in individual sketching practices.
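To make "styling as a first-class object" concrete, here is a minimal sketch of a style-block that encapsulates ink strokes for reuse, plus a fillable pen "loaded" with a command; the class and method names are assumptions for illustration, not Style Blink's actual API.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Point = Tuple[float, float]

@dataclass
class InkStroke:
    points: List[Point]

@dataclass
class StyleBlock:
    """Encapsulates handcrafted ink (e.g. a bullet glyph or cell decoration)
    so it can be modified once and reused across a larger structure."""
    name: str
    strokes: List[InkStroke]

    def stamp_at(self, anchor: Point) -> List[InkStroke]:
        """Reapply the encapsulated ink at a new anchor (translation only, for brevity)."""
        ax, ay = anchor
        return [InkStroke([(x + ax, y + ay) for x, y in s.points]) for s in self.strokes]

@dataclass
class FillablePen:
    """A pen 'loaded' with a command, so drawing with it triggers an action
    instead of a menu selection."""
    command: Callable[[Point], None]

    def draw(self, at: Point) -> None:
        self.command(at)

# Hypothetical usage: a hand-drawn bullet reused as a tool by a loaded pen.
bullet = StyleBlock("hand-drawn bullet", [InkStroke([(0, 0), (1, 1), (2, 0)])])
pen = FillablePen(command=lambda at: print("stamped:", bullet.stamp_at(at)))
pen.draw((10.0, 20.0))
```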
April 2022 · 46 Reads · 11 Citations
To better ground technical (systems) investigation and interaction design of cross-device experiences, we contribute an in-depth survey of existing multi-device practices, including fragmented workflows across devices and the way people physically organize and configure their workspaces to support such activity. Further, this survey documents a historically significant moment of transition to a new future of remote work, an existing trend dramatically accelerated by the abrupt switch to work-from-home (and having to contend with the demands of home-at-work) during the COVID-19 pandemic. We surveyed 97 participants and collected photographs of home setups as well as open-ended answers to 50 questions categorized into 5 themes. We characterize the wide range of multi-device physical configurations and identify five usage patterns: partitioning tasks, integrating multi-device usage, cloning tasks to other devices, expanding tasks and inputs to multiple devices, and migrating between devices. Our analysis also sheds light on the benefits and challenges people face when their workflow is fragmented across multiple devices. These insights have implications for the design of multi-device experiences that support people's fragmented workflows.
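As an illustrative sketch only, the five usage patterns could be encoded for coding survey responses; the enumeration labels follow the abstract, but the tagging example and all names are assumptions.

```python
from collections import Counter
from enum import Enum, auto

class MultiDevicePattern(Enum):
    """The five multi-device usage patterns identified in the survey."""
    PARTITIONING = auto()  # split distinct tasks across devices
    INTEGRATING = auto()   # combine several devices into one workflow
    CLONING = auto()       # mirror the same task on another device
    EXPANDING = auto()     # spread one task's views/inputs over multiple devices
    MIGRATING = auto()     # move a task from one device to another

# Hypothetical coding of a few survey responses for a quick frequency count.
observations = [MultiDevicePattern.PARTITIONING,
                MultiDevicePattern.MIGRATING,
                MultiDevicePattern.PARTITIONING]
print(Counter(observations))
```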
... With an increasing number of devices in an average user's tech ecosystem, researchers are exploring ways of bridging interactions between devices (e.g., [45]). At the same time, users are learning to interact and collaborate synchronously across different forms of extended realities and interfaces (e.g., VR and AR; VR and desktop). ...
July 2023
... Sayara et al. [68] proposed an immersive prototyping system that uses PbD along with event-driven state machines and trigger-action authoring to enable users to design, test and deploy Compound Freehand Interactions in VR. Direct manipulation, often used to create screen-based interactive animations [32,33,36,41,54], offers an alternative to PbD to author AR/VR interactions. Recent systems have enabled the use of hand-gestures [3,16], sketching [23,72] and physical objects [53] to create and control different virtual objects. ...
April 2023
... The modification of a virtual object's structure, its scaling or deformation, is defined in [13] as an Edition task. Rendering a hand-object edition interaction is qualified as palpation when it involves a single contact [21,38,53], or as compliance manipulation [21,29] when it involves in-hand manipulation, such as a grasping task. ...
July 2022
... Users often worry that a system lacking high fidelity cannot be trusted. However, recent studies suggest fidelity is more nuanced [3], with users accepting cartoon-ish avatars or filters over raw cameras for communication despite lower fidelity [10]. One-to-one mapping is traditionally seen as the best way to preserve user motion and intentions, particularly in shared or context-free environments like 2D video conferences. ...
June 2022
... Indeed, rotation of objects is a complex task that, when performed by the NDH, relies on automatic motor control mechanisms. This may require stronger bottom-up input (and afferent sensory paths) for planning and target acquisition for grabbing an object to make the most of the rotational range of the hand, which is more difficult when the hands are not on the object [24,26]. Interestingly, the I+D problem was not aggravated with fully indirect. ...
December 2022
... This work extended Textlets [21], in which text selections are treated as persistent interactive items; based on the Instrumental Interaction model [3] and inspired by interviews with legal professionals, textlets turn concepts (such as the selected text) into objects that can be manipulated by instruments (commands) or meta-instruments (commands that act on instruments). This approach has also been extended to digital ink with Style-Blocks+Ink [39]. Structured note-taking (sketchnoting, self-tracking, or bullet journaling) is a (mostly hand-drawn) practice where pride in craftsmanship leads to a perceived increase in the value of the created artefacts. ...
April 2022
... confined spaces with many distractions [34], relying on the portable devices available to them at the time, such as laptops, tablets, and smartphones [92]. Remote work scenarios arise from necessity (e.g., during travel, or a meeting), inspiration (e.g., being outdoors in proximity to nature), or simply from preference or health considerations (e.g., working from home). ...
April 2022
... Cross-device interactions [8] allow seamless interactions between multiple, distributed, and inter-connected devices for collaboration [7,22,42], communication [3], and entertainment [1,5,6]. As the number of devices continues to increase in today's ubiquitous computing era (e.g., phones, tablets, wearables, and desktops) [29,40], designing seamless interactions for this "device ecology" becomes increasingly important, not only in HCI research [8], but also in commercial products, such as Apple's Continuity¹ and Microsoft's technology in collaboration with Android (the Surface Duo and select Samsung devices)². In particular, cross-device interactions enable a set of unique interactions and affordances for collaboration by leveraging seamless communication between multiple users and devices. ...
October 2021
... The combination of visual cues with audio narrations can also impact comprehension and recall, though this effect depends on the specific techniques and chart types employed [76]. Symbols and pictograms have been shown to enhance both comprehension and recall in visual note-taking [136]. Moreover, textual annotations that are automatically linked to visual elements using a knowledge graph, and then refined by users, can improve the accuracy and effectiveness of the visualizations [137]. ...
May 2021
... A customized XR display that is associated with a tracking system typically includes a portal or window. For example, in 2012, Samsung unveiled its Smart Window prototype [123], a transparent display that allows users to view a cityscape. Another example is the virtual mirror: a camera and a display that allow surgeons to view all details related to a particular medical imaging scan without moving from the operating table. ...
October 2020