Desney S. Tan’s research while affiliated with Microsoft and other places

Publications (104)


Interactive concept learning in image search
  • Patent

April 2015 · 5 Reads · 2 Citations

Desney S. Tan · [...] · James A. Fogarty

An interactive concept learning image search technique that allows end-users to quickly create their own rules for re-ranking images based on their image characteristics. These characteristics can include visual characteristics, semantic features, or a combination of both. End-users can then rank or re-rank any current or future image search results according to their rule or rules. End-users provide examples of images each rule should match and examples of images the rule should reject. The technique learns the common image characteristics of the examples, and any current or future image search results can then be ranked or re-ranked according to the learned rules.
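
As a rough illustration of the re-ranking idea (not the patented method itself), the sketch below trains an off-the-shelf classifier on hypothetical feature vectors for the user's accepted and rejected example images and uses its scores to re-rank a result set; the features, data, and classifier choice are all assumptions.

```python
# Minimal sketch of concept learning for image re-ranking (illustrative only;
# the feature vectors stand in for real visual/semantic image characteristics).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical feature vectors for user-labeled example images.
accepted = rng.normal(loc=1.0, size=(10, 16))   # images the rule should match
rejected = rng.normal(loc=-1.0, size=(10, 16))  # images the rule should reject

X = np.vstack([accepted, rejected])
y = np.array([1] * len(accepted) + [0] * len(rejected))

# Learn the common characteristics of the examples.
rule = LogisticRegression(max_iter=1000).fit(X, y)

# Re-rank current search results by how strongly they match the learned rule.
search_results = rng.normal(size=(50, 16))      # placeholder result features
scores = rule.predict_proba(search_results)[:, 1]
reranked = np.argsort(-scores)                  # indices, best match first
print(reranked[:10])
```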


Multi-frame depth image information identification
  • Patent
  • Full-text available

February 2015 · 10 Reads

Embodiments of the present invention relate to systems, methods, and computer storage media for identifying, authenticating, and authorizing a user to a device. A dynamic image, such as a video captured by a depth camera, is received. The dynamic image provides data from which both geometric information and motion information about a portion of the user may be identified. Consequently, a geometric attribute is identified from the geometric information, and a motion attribute may also be identified from the motion information. The geometric attribute is compared to one or more geometric attributes associated with authorized users. Additionally, the motion attribute may be compared to one or more motion attributes associated with the authorized users. A determination may be made that the user is an authorized user. As such, the user is authorized to utilize functions of the device.
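
A minimal sketch of the attribute-matching step the abstract describes, assuming precomputed geometric and motion attribute vectors and simple distance thresholds; the attribute definitions, tolerances, and enrolled profiles are illustrative placeholders, not the claimed pipeline.

```python
# Illustrative attribute-based matching: geometric and motion attributes
# extracted from a depth video are compared against enrolled profiles.
import numpy as np

def is_authorized(geometric, motion, enrolled_users, geo_tol=0.5, motion_tol=0.5):
    """Return the matching user id, or None if no enrolled user is close enough."""
    for user_id, profile in enrolled_users.items():
        geo_dist = np.linalg.norm(geometric - profile["geometric"])
        motion_dist = np.linalg.norm(motion - profile["motion"])
        if geo_dist < geo_tol and motion_dist < motion_tol:
            return user_id
    return None

# Hypothetical enrolled profiles and one observed user.
enrolled = {
    "alice": {"geometric": np.array([0.31, 0.18, 0.42]),
              "motion":    np.array([0.05, 0.90])},
}
observed_geometric = np.array([0.30, 0.20, 0.40])  # e.g. hand proportions
observed_motion = np.array([0.06, 0.88])           # e.g. wave speed/amplitude
print(is_authorized(observed_geometric, observed_motion, enrolled))  # -> "alice"
```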


Interactive optimization of the behavior of a system

November 2014 · 9 Reads

An interactive tool is described for modifying the behavior of a system, such as, but not limited to, the behavior of a classification system. The tool uses an interface mechanism to present a current global state of the system. The tool accepts one or more refinements to this global state, e.g., by accepting individual changes to parameter settings that are presented by the interface mechanism. Based on this input, the tool computes and displays the global implications of the updated parameter settings. The process of iterating over one or more cycles of user updates, followed by computation and display of the implications of the attempted refinements, has the effect of advancing the system towards a global state that exhibits desirable behavior.
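
To make the refine-and-recompute loop concrete, here is a small sketch using a classifier's decision threshold as the user-adjustable parameter: each hypothetical user setting triggers recomputation of the global implications (precision and recall over a synthetic dataset). The data and the choice of threshold as the parameter are assumptions, not the tool's actual interface.

```python
# Sketch of the "refine a parameter, see the global implications" loop for a
# classifier; scores and labels are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=1000)
scores = np.clip(labels * 0.3 + rng.normal(0.35, 0.25, size=1000), 0, 1)

def global_state(threshold):
    """Compute the system-wide implications of one parameter setting."""
    predictions = scores >= threshold
    tp = np.sum(predictions & (labels == 1))
    fp = np.sum(predictions & (labels == 0))
    fn = np.sum(~predictions & (labels == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"threshold": threshold, "precision": precision, "recall": recall}

# Each user refinement triggers recomputation and display of the new state.
for user_setting in (0.3, 0.5, 0.7):
    print(global_state(user_setting))
```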


User control gesture detection

June 2014 · 24 Reads

The description relates to user control gestures. One example allows a speaker and a microphone to perform a first functionality. The example simultaneously utilizes the speaker and the microphone to perform a second functionality. The second functionality comprises capturing sound signals that originated from the speaker with the microphone and detecting Doppler shift in the sound signals. It correlates the Doppler shift with a user control gesture performed proximate to the computer and maps the user control gesture to a control function.
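
A rough sketch of the signal-processing idea, assuming a simulated microphone capture: the pilot tone frequency, window length, and the simple band-energy comparison are illustrative choices, not the patented detector.

```python
# The speaker emits an inaudible pilot tone; energy that appears above or
# below the pilot frequency in the microphone signal indicates motion toward
# or away from the device. The "recording" here is simulated.
import numpy as np

FS = 44100          # sample rate (Hz)
PILOT = 18000       # emitted tone frequency (Hz)
N = 4096            # analysis window length

t = np.arange(N) / FS
# Simulated capture: pilot tone plus a weaker, Doppler-shifted echo
# (a hand moving toward the device shifts the reflection up in frequency).
mic = np.sin(2 * np.pi * PILOT * t) + 0.3 * np.sin(2 * np.pi * (PILOT + 120) * t)

spectrum = np.abs(np.fft.rfft(mic * np.hanning(N)))
freqs = np.fft.rfftfreq(N, 1 / FS)

pilot_bin = np.argmin(np.abs(freqs - PILOT))
band = 40  # number of FFT bins to inspect on each side of the pilot
upper = spectrum[pilot_bin + 3: pilot_bin + 3 + band].sum()
lower = spectrum[pilot_bin - 3 - band: pilot_bin - 2].sum()

# Map the dominant shift direction to a coarse user control gesture.
if upper > 1.5 * lower:
    gesture = "push (toward the device)"
elif lower > 1.5 * upper:
    gesture = "pull (away from the device)"
else:
    gesture = "no motion detected"
print(gesture)
```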


Method and system for meshing human and computer competencies for object categorization

April 2014 · 6 Reads

The subject disclosure relates to a method and system for visual object categorization. The method and system include receiving human inputs including data corresponding to passive human-brain responses to visualization of images. Computer inputs are also received which include data corresponding to outputs from a computerized vision-based processing of the images. The human and computer inputs are processed so as to yield a categorization for the images as a function of the human and computer inputs.
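
As a toy illustration of combining the two evidence sources, the sketch below fuses hypothetical per-category scores from brain-response decoding with a vision classifier's output via a weighted average; the categories, scores, and fusion rule are assumptions, not the claimed method.

```python
# Illustrative fusion of "human" and "computer" evidence for one image.
import numpy as np

categories = ["face", "car", "animal"]

# Hypothetical per-category probabilities from each source for one image.
human_scores = np.array([0.70, 0.10, 0.20])     # from brain-response decoding
computer_scores = np.array([0.40, 0.45, 0.15])  # from a vision classifier

def fuse(human, computer, human_weight=0.5):
    """Weighted combination of the two evidence sources, renormalized."""
    combined = human_weight * human + (1.0 - human_weight) * computer
    return combined / combined.sum()

fused = fuse(human_scores, computer_scores)
print(categories[int(np.argmax(fused))], fused.round(3))  # -> "face"
```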


Experimental research in HCI

March 2014 · 306 Reads · 56 Citations

In experiments, researchers set up comparable situations in which they carefully manipulate variables and observe people’s behavior in each condition. Experiments are very effective in determining causation in controlled situations and complement techniques that investigate ongoing behavior in more natural settings. For example, experiments are excellent for determining whether increased audio quality reduces the blood pressure of participants in a video conference, and can add important insights to the larger question of when people choose video conferences over audio-only ones.
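
For a concrete (if simplified) example of analyzing such a two-condition comparison, the snippet below simulates one dependent measure per participant under two audio-quality conditions and compares them with an independent-samples t-test; the data, sample sizes, and effect are made up.

```python
# Toy analysis of a controlled two-condition experiment; data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
high_quality = rng.normal(loc=118, scale=8, size=24)  # e.g. systolic BP (mmHg)
low_quality = rng.normal(loc=124, scale=8, size=24)

t_stat, p_value = stats.ttest_ind(high_quality, low_quality)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```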


Sensing user input using the body as an antenna

March 2014 · 20 Reads

A human input system is described herein that provides an interaction modality that utilizes the human body as an antenna to receive electromagnetic noise that exists in various environments. By observing the properties of the noise picked up by the body, the system can infer human input on and around existing surfaces and objects. Home power lines have been shown to be a relatively good transmitting antenna that creates a particularly noisy environment. The human input system leverages the body as a receiving antenna and electromagnetic noise modulation for gestural interaction. It is possible to robustly recognize touched locations on an uninstrumented home wall using no specialized sensors. The receiving device for which the human body is the antenna can be built into common, widely available electronics, such as mobile phones or other devices the user is likely to commonly carry.
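
A sketch of what the classification step might look like, assuming spectra of the body-coupled noise as features and an SVM as the classifier (one plausible choice, not necessarily the system's); the locations and "measurements" are synthetic.

```python
# Classify which uninstrumented location was touched from the spectrum of the
# electromagnetic noise picked up by the body. Data here are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)
locations = ["light switch", "wall near outlet", "doorframe"]

def synthetic_spectrum(location_idx):
    """Fake 64-bin noise spectrum whose shape depends on the touched location."""
    base = np.zeros(64)
    base[5 + 10 * location_idx] = 1.0           # location-dependent peak
    return base + 0.05 * rng.normal(size=64)

X = np.array([synthetic_spectrum(i) for i in range(3) for _ in range(30)])
y = np.array([i for i in range(3) for _ in range(30)])

clf = SVC(kernel="rbf").fit(X, y)
touch = synthetic_spectrum(2)
print(locations[int(clf.predict([touch])[0])])  # -> "doorframe"
```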


Exploring data using multiple machine-learning models

November 2013 · 7 Reads

Kayur Dushyant Patel · Desney S. Tan · [...] · James Anthony Fogarty

A multiple model data exploration system and method for running multiple machine-learning models simultaneously to understand and explore data. Embodiments of the system and method allow a user to gain a greater understanding of, and new insights into, their data. Embodiments of the system and method also allow a user to interactively explore the problem and to navigate different views of the data. Many different classifier training and evaluation experiments are run simultaneously and results are obtained. The results are aggregated and visualized across each of the experiments to determine and understand how each example is classified by each different classifier. These results are then summarized in a variety of ways to allow users to obtain a greater understanding of the data, both in terms of the individual examples themselves and the features associated with the data.
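
The aggregation idea can be sketched as follows, with an arbitrary public dataset and three off-the-shelf classifiers standing in for the user's models; the per-example agreement summary is one possible view, not the system's actual visualization.

```python
# Train several models on the same data and aggregate per-example results so a
# user can see which examples every model gets right, which are contested, and
# which always fail. Dataset and model choices are illustrative.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
    "knn": KNeighborsClassifier(),
}

# Per-example correctness for every model (rows: examples, columns: models).
correct = np.column_stack([
    model.fit(X_train, y_train).predict(X_test) == y_test
    for model in models.values()
])

agreement = correct.sum(axis=1)  # how many models got each example right
print("always right:", int(np.sum(agreement == len(models))))
print("contested:   ", int(np.sum((agreement > 0) & (agreement < len(models)))))
print("always wrong:", int(np.sum(agreement == 0)))
```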


Touch sensitive display apparatus using sensor input

November 2013 · 8 Reads

Described herein is a system that includes a receiver component that receives gesture data from a sensor unit coupled to the body of a gloveless user, where the gesture data is indicative of a bodily gesture comprising movement of at least one limb of that user. The system further includes a location determiner component that determines the location of the bodily gesture with respect to a touch-sensitive display apparatus. The system also includes a display component that causes the touch-sensitive display apparatus to display an image based at least in part upon the received gesture data and the determined location of the bodily gesture with respect to the touch-sensitive display apparatus.
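
A minimal sketch of the three components the abstract names, with hypothetical data structures and a made-up room-to-display coordinate mapping standing in for the real sensor and display plumbing.

```python
# Receiver (GestureEvent), location determiner (locate_on_display), and a
# display component (render) wired together with placeholder values.
from dataclasses import dataclass

@dataclass
class GestureEvent:
    kind: str              # e.g. "point", "swipe"
    limb: str              # which limb produced the movement
    position_mm: tuple     # sensor-estimated position in room coordinates

DISPLAY_ORIGIN_MM = (500.0, 300.0)   # top-left of the display in room coords
PIXELS_PER_MM = 3.78                 # roughly 96 dpi

def locate_on_display(event: GestureEvent) -> tuple:
    """Map the gesture's room position to display pixel coordinates."""
    dx = event.position_mm[0] - DISPLAY_ORIGIN_MM[0]
    dy = event.position_mm[1] - DISPLAY_ORIGIN_MM[1]
    return (dx * PIXELS_PER_MM, dy * PIXELS_PER_MM)

def render(event: GestureEvent) -> None:
    """Display component: react to the gesture at its determined location."""
    x, y = locate_on_display(event)
    print(f"draw cursor for {event.limb} {event.kind} at ({x:.0f}, {y:.0f}) px")

render(GestureEvent(kind="point", limb="right arm", position_mm=(650.0, 420.0)))
```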


The sound of touch: On-body touch and gesture sensing based on transdermal ultrasound propagation

October 2013 · 133 Reads · 100 Citations

Recent work has shown that the body provides an interesting interaction platform. We propose a novel sensing technique based on transdermal low-frequency ultrasound propagation. This technique enables pressure-aware continuous touch sensing as well as arm-grasping hand gestures on the human body. We describe the phenomena we leverage as well as the system that produces ultrasound signals on one part of the body and measures this signal on another. The measured signal varies according to the measurement location, forming distinctive propagation profiles that can be used to infer on-body touch locations and gestures. We also report on a series of experimental studies with 20 participants that characterize the signal and show robust touch and gesture classification along the forearm.
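
As a simplified stand-in for the classification described, the sketch below maps synthetic propagation profiles (whose shape depends on touch location and scales with pressure) to locations with an SVM; the features, noise model, and classifier choice are assumptions rather than the paper's exact pipeline.

```python
# Map received-ultrasound propagation profiles to on-body touch locations.
# Signals here are synthetic placeholders, not real transdermal measurements.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
locations = ["wrist", "mid forearm", "elbow"]

def propagation_profile(location_idx, pressure=1.0):
    """Fake received-signal features whose shape depends on touch location."""
    shapes = np.array([[1.0, 0.2, 0.1],    # wrist
                       [0.3, 1.0, 0.3],    # mid forearm
                       [0.1, 0.2, 1.0]])   # elbow
    return pressure * shapes[location_idx] + 0.05 * rng.normal(size=3)

X = np.array([propagation_profile(i, p)
              for i in range(3) for p in np.linspace(0.8, 1.2, 10)])
y = np.repeat(np.arange(3), 10)

clf = SVC(kernel="rbf").fit(X, y)
print(locations[int(clf.predict([propagation_profile(1)])[0])])  # -> "mid forearm"
```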


Citations (84)


... The human body offers a large and always-available surface that can be accessed quickly and accurately without relying on visual feedback, due to proprioception [7]. It serves as a mnemonic frame of reference for associating meanings to different body parts [1] or kinesthetic cues [25]. While previous work, e.g., [8,16,32], propose on-body tapping as a promising technique for interacting with smart mobile devices, it remains unknown how users' motion, e.g., running, impacts this technique. ...

Reference:

Run&Tap: Investigation of On-Body Tapping for Runners
Kinesthetic cues aid spatial memory
  • Citing Conference Paper
  • January 2002

... Research has highlighted that display size and resolution influences a wide range of variables including viewing distance [105,157,158], text legibility [159,160], accommodation [161], asthenopia [162], pupil size [163], musculoskeletal strain [164,165] and visual performance [166], although many of these effects are likely to be device specific. These larger high-resolution displays have been shown to aid productivity [167][168][169][170][171] and improve the ability to share content [172], but such displays typically have increased power demands, device weight/bulk and a requirement for more graphical processing power. ...

With similar visual angles, larger displays improve spatial performance
  • Citing Conference Paper
  • January 2003

... Research has demonstrated, for example, how software can be commonly tailored to male-typical work strategies or use patterns (Beckwith & Burnett, 2004;Cifor & Garcia, 2020), disadvantaging more female-typical use; this can be as direct as voice recognition systems (e.g., in-car) responding better to male voices than female (Carty, 2011), or image search results reinforcing existing gender stereotypes (Kay et al., 2015). These designs can be re-envisioned to be more inclusive, reducing any productivity or use divide (Czerwinski et al., 2002;Grigoreanu et al., 2008;Tan et al., 2003). Similarly, research has shown how existing archetypical gendered interactions (e.g., sexism and gendered expectations) translate to online social platforms, with the platform design playing an important role (Bivens & Haimson, 2016;Dubois et al., 2020), and resulting issues serving as barriers to use (Vashistha et al., 2019). ...

Women go with the (optical) flow
  • Citing Conference Paper
  • January 2003

... Qualitative research may involve a small cohort for an in-depth exploration of individual translation processes. In quantitative projects, despite existing guidelines, calculating an appropriate sample size involves various considerations, like the study's nature, design, the number of conditions, statistical confidence, sensitivity, measurement variability, and meaningful differences (Gergle & Tan, 2014). ...

Reference:

Keylogging
Experimental research in HCI
  • Citing Chapter
  • March 2014

... For example, by employing benevolent deception, the recovery rate of patients can be increased. Research has primarily focused on eliminating selfish deception while overlooking the potential benefits of using deception constructively [15]. This raises the question of whether we should permit AV to disclose benevolent deception information? ...

Benevolent deception in human computer interaction
  • Citing Conference Paper
  • April 2013

... However, the aim of early HBC studies was mostly to develop shielding techniques of electrostatic discharge for safety operations. The idea of making use of HBC only emerged gradually from the last decades, examples are HBC-based communication [27], cooperation detection [28] as well as basic motion detection [29], [30]. Despite such pioneer studies, the question of how this HBC concept could benefit current wearable motion sensing remains unanswered. ...

An ultra-low-power human body motion sensor using static electric field sensing

... Other studies tackling active hand/upper-body pose estimation based on acoustic signals required users to put on wearable devices [20,25,27,46], which is not practical in real-life scenarios. For more practical human sensing, we utilize nonengineered BGM for noninvasive active sensing. ...

The sound of touch: On-body touch and gesture sensing based on transdermal ultrasound propagation
  • Citing Conference Paper
  • October 2013

... As a classic algorithm of machine learning, SVM (support vector machine) is also used. Cohn et al. [10] utilized SVM to map electronic noise to various pose classifications. Nguyen et al. [33] identified the direction of the deformation correctly by SVM. ...

Humantenna: Using the Body as an Antenna for Real-Time Whole-Body Interaction
  • Citing Article
  • May 2012

... Patients' autonomous and self-guided engagement with their health situation, for example, by accessing health data during ongoing healthcare, has been associated with better health outcomes and patient satisfaction [39], including POD [55]. This is further illustrated in Pfeifer Vardoulakis et al. [42]'s study, which provided hospitalized patients with insights into their care via a mobile application and found that patients experienced an increased sense of participation in their care. The authors suggest that this approach holds great promise for reducing patient anxiety, raising awareness, and increasing patient empowerment, along with patients' perceptions of ownership of their care. ...

Using mobile phones to present medical information to hospital patients
  • Citing Article
  • May 2012

... These effects have also been investigated by comparing viewers' responses to content watched on stationary screens of different sizes arguing that smaller screens produce smaller visual angles that can impair engagement with an audio-visual stimulus. Notably, it was found that smaller screens or visual angles negatively affect arousal [29], self-reported presence [21,23,42], the completion of visual tasks [40], and the sensation of reality [20]. Studies of narrative film concluded that screen size affects gaze dispersion on the screen [35] and emotional engagement especially while watching particular types of content, such as scenes depicting human faces [42]. ...

Exploiting the cognitive and social benefits of physically large displays
  • Citing Thesis
  • January 2004