Anton Treskunov

PhD in Computer Science
  • About
    Introduction
    I am passionate about finding ways to improve people's lives using modern technology. To this end I have employed a range of technologies:
    - robotics and computer vision to eliminate tedious, eye-straining inspection work;
    - computer graphics and software engineering to deal with a zoo of different file formats;
    - virtual reality for decision-making training and psychological rehabilitation;
    - human-computer interaction and pattern recognition to simplify TV control tasks.
    30 Research items · 1,570 Reads · 141 Citations
    Research Experience
    Oct 2009 - Jul 2015
    Research Engineer
    Samsung Research America · User Experience Center
    Mountain View, California, United States
    Description
    Research new ways of interacting with TVs, as well as HCI in general. Prototype novel input mechanisms and Smart TV user interfaces in close cooperation with designers and user researchers. Submit patent applications. Conduct Virtual Reality research.
    Aug 2003 - Jun 2009
    Research Scientist
    University of Southern California · Institute for Creative Technologies
    Marina del Rey, California, United States
    Description
    Virtual Reality research and software development; FlatWorld and Virtual Iraq projects; work on smart projectors.
    Education
    Sep 1986 - Jun 1993
    Keldysh Institute of Applied Mathematics of the Russian Academy of Sciences
    Robot Vision
    Sep 1981 - Jun 1986
    Moscow State University
    Mathematics & Mechanics
    Followers (47)
    Joann Difede
    Joy Bose
    Arunabha Choudhury
    Yanzi Zhang
    Jeffrey Kim
    Kim Hee Young
    Qiang Gao
    Sungkeun Cho
    Mith Naya
    Vijay Kumar
    Following (16)
    Ken Graap
    Andrei Sherstyuk
    Albert Rizzo
    Mark Billinghurst
    Henry Fuchs
    Jarrell Pair
    Stefan Johannes Walter Marti
    Thomas D Parsons
    Greg Welch
    Larry F Hodges
    Awards & Achievements (3)
    Feb 2013
    Outstanding Achievement Award from Samsung Research America for contribution to 2013 Samsung TV Smart Remote Control
    Award
    Jun 2008
    Best Medical Application award at the 10th Virtual Reality International Conference, Laval, France for virtual reality exposure therapy treatment for PTSD
    Award
    Nov 2004
    Best Paper award, 24th Army Science Conference
    Award
    Research Items
    This paper presents a method to estimate hand positions behind a display, while the person's body is in front of the display. This allows for direct and bare hand interaction with virtual objects in an AR setting. As a sensor, we use a time-of-flight range camera, which provides depth information per pixel, but is noisy. While hand tracking has previously been attempted by localizing body parts and fitting a body model into the data, we propose a simpler algorithm. We first remove outliers, and then model the hand as an oriented box whose position and orientation is determined by a PCA algorithm. The calculated hand model is then used in a physics engine based interaction with virtual content.
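    A minimal sketch of the box-fitting step described in the abstract above, assuming numpy point clouds; the paper's actual outlier filter, thresholds, and physics-engine coupling are not shown, and the values here are illustrative.

import numpy as np

def remove_outliers(points, k=2.0):
    """Drop points farther than k standard deviations from the centroid."""
    center = points.mean(axis=0)
    dist = np.linalg.norm(points - center, axis=1)
    return points[dist < dist.mean() + k * dist.std()]

def fit_oriented_box(points):
    """PCA on an (N, 3) point cloud -> box center, axes, half-extents."""
    points = remove_outliers(points)
    center = points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov((points - center).T))
    local = (points - center) @ eigvecs      # coordinates in the box frame
    half_extents = (local.max(axis=0) - local.min(axis=0)) / 2.0
    return center, eigvecs, half_extents     # eigvecs columns = box axes

# Synthetic elongated "hand" cloud as a stand-in for depth-camera data:
rng = np.random.default_rng(0)
cloud = rng.normal(scale=[0.09, 0.04, 0.02], size=(500, 3))
center, axes, extents = fit_oriented_box(cloud)
print(center.round(3), extents.round(3))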
    The narrow field of view of common Head Mounted Displays, coupled with the lack of adaptive camera accommodation and vergence, makes it impossible to view virtual scenes using familiar eye-head-body coordination patterns and reflexes. This impediment of natural habits is most noticeable in applications where users face multiple tasks that require frequent switching between viewing modes, from wide-range visual search to object examination at close distances. We propose a new technique for proactive control of the virtual camera by utilizing a predator-prey vision metaphor. We describe the technique, the implementation, and preliminary results.
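    One way to read the predator-prey metaphor is as a blend between a wide "prey" field of view for search and a narrow, magnified "predator" view for examination. The sketch below is a hedged illustration of that idea only; the mode trigger, angles, and blend rate are assumptions, not the paper's parameters.

PREY_FOV_DEG = 100.0      # wide-angle search mode (assumed value)
PREDATOR_FOV_DEG = 25.0   # narrow, magnified examination mode (assumed)

def update_fov(current_fov, examining, dt, blend_rate=4.0):
    """Exponentially approach the target FOV for the active viewing mode."""
    target = PREDATOR_FOV_DEG if examining else PREY_FOV_DEG
    return current_fov + (target - current_fov) * min(1.0, blend_rate * dt)

fov = PREY_FOV_DEG
for frame in range(10):                    # ten frames at 60 Hz
    fov = update_fov(fov, examining=True, dt=1 / 60)
print(round(fov, 1))                       # camera is zooming in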
    Posttraumatic Stress Disorder (PTSD) is reported to be caused by traumatic events that are outside the range of usual human experience including (but not limited to) military combat, violent personal assault, being kidnapped or taken hostage and terrorist attacks. Initial data suggests that at least 1 out of 5 Iraq War veterans are exhibiting symptoms of depression, anxiety and PTSD. Virtual Reality (VR) delivered exposure therapy for PTSD has been previously used with reports of positive outcomes. The current paper is a follow-up to a paper presented at IEEE VR2006 and will present the rationale and description of a VR PTSD therapy application (Virtual Iraq) and present the findings from its use with active duty service members since the VR2006 presentation. Virtual Iraq consists of a series of customizable virtual scenarios designed to represent relevant Middle Eastern VR contexts for exposure therapy, including a city and desert road convoy environment. User-centered design feedback needed to iteratively evolve the system was gathered from returning Iraq War veterans in the USA and from a system deployed in Iraq and tested by an Army Combat Stress Control Team. Results from an open clinical trial using Virtual Iraq at the Naval Medical Center-San Diego with 20 treatment completers indicate that 16 no longer met PTSD diagnostic criteria at post-treatment, with only one not maintaining treatment gains at 3 month follow-up.
    Real time computer graphics are limited in that they can only be displayed on projection screens and monitors. Monitors and projection screens cannot be used in live fire training or scenarios in which the displays could be physically damaged by trainees. To address this issue, we have developed projection systems using computer vision based color correction and image processing to project onto non-ideal surfaces such as painted walls, cinder blocks, and concrete floors. These projector-camera systems effectively paint the real world with digital light. Any surface can become an interactive projection screen allowing unprepared spaces to be transformed into an immersive environment. Virtual bullet holes, charring, and cracks can be added to real doors, walls, tables, chairs, cabinets, and windows. Distortion correction algorithms allow positioning of projection devices out of the field of view of trainees and their weapons. This paper describes our motivation and approach for implementing projector-camera systems for use within the FlatWorld wide area mixed reality system.
    The authors describe algorithms and hardware for an automatic output-check system for liquid crystal display (LCD) manufacturing. A TV camera and a high-speed video processor with a data-flow architecture are presented. The main algorithmic difficulty in the LCD check problem is the detection of small defects in real time; to achieve this, an algorithm utilizing a priori information was designed. The methods used for the functional LCD check do not require any human intervention when applied to new LCD types.
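    The abstract gives no implementation detail, but a common shape for such a-priori-driven checks looks like the sketch below: compare a captured frame of the powered panel against a known reference pattern and flag deviating pixels. The reference image, threshold, and minimum blob size are assumptions for illustration, not the authors' parameters.

import numpy as np

def find_defects(frame, reference, threshold=30, min_pixels=3):
    """Flag the panel if enough pixels deviate from the expected pattern."""
    diff = np.abs(frame.astype(int) - reference.astype(int))
    defect_mask = diff > threshold
    return defect_mask.sum() >= min_pixels, defect_mask

ref = np.full((480, 640), 128, dtype=np.uint8)   # expected uniform pattern
bad = ref.copy()
bad[100:102, 200:202] = 255                      # a small stuck-pixel blob
print(find_defects(bad, ref)[0])                 # True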
    Recently a number of TV manufacturers introduced TV remotes with a touchpad used for indirect control of the TV UI. Users can navigate the UI by moving a finger across the touchpad. However, due to latency in the visual feedback, there is a disconnect between the finger movement on the touchpad and the visual perception in the TV UI, which often causes overshooting. In this paper, we investigate how haptic feedback affects the user experience of the touchpad-based TV remote. We describe two haptic prototypes, built on a smartphone and on the Samsung 2013 TV remote respectively, and two user studies conducted with them to evaluate how user preference and user performance were affected. The results show overwhelming support for haptic feedback in terms of subjective user preference, though we did not find a significant difference in performance between the conditions with and without haptic feedback.
    Terrain image maps are widely used in 3D Virtual Environments, including games, online social worlds, and Virtual Reality systems, for controlling the elevation of ground-bound travelers and other moving objects. By making use of all available color channels in the terrain image, it is possible to encode important travel-related information, such as the presence of obstacles, directly into the image. This information can be retrieved in real time, for collision detection and avoidance, at the flat cost of accessing pixel values from image memory. We take this idea of overloading terrain maps even further and introduce time maps, where pixels can also define the rate of time for each player at a given location. In this concept work, we present a general mechanism for encoding the rate of time into a terrain image and discuss a number of applications that may benefit from making the time rate location-specific. We also offer some insights into how such space-time maps can be integrated into existing game engines.
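    A sketch of the space-time map idea under assumed channel conventions (the paper's actual encoding may differ): here R holds elevation, G marks obstacles, and A stores the local rate of time as a fixed-point value.

import numpy as np

TIME_RATE_SCALE = 64.0   # alpha 64 -> normal rate of time (assumption)

def sample_terrain(terrain_rgba, x, y):
    """Look up elevation, obstacle flag, and time rate at one map pixel."""
    r, g, b, a = terrain_rgba[y, x]
    elevation = float(r)
    blocked = g > 0
    time_rate = a / TIME_RATE_SCALE   # 0.5 = slow motion, 2.0 = fast
    return elevation, blocked, time_rate

terrain = np.zeros((256, 256, 4), dtype=np.uint8)
terrain[..., 3] = 64                       # normal time everywhere
terrain[100:120, 100:120, 3] = 32          # a "slow time" zone
print(sample_terrain(terrain, 110, 110))   # (0.0, False, 0.5)

# Per-player clocks would then advance as: player.clock += dt * time_rate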
    The recent advent of video-based tracking technologies has made it possible to bring natural head motion to any 3D application, including games. However, video tracking is a CPU-intensive process, which may have a negative impact on game performance. In this work, we examine this impact for different types of 3D content, using a game prototype built with two advanced components, the CryENGINE2 platform and the faceAPI head-tracking system. Our findings indicate that the cost of video tracking is negligible. We provide details on our system implementation and performance analysis, and present a number of new techniques for controlling user avatars in social 3D games based on head motion.
    Mouse, joystick and keyboard controls in 3D games have long since become second nature for generations of gamers. Recent advances in webcam-based tracking technologies have made it easy to bring natural human motions and gestures into play. However, video tracking is a CPU-intensive process, which may have a negative impact on game performance. We measured this impact for different types of 3D content and found it to be minimal on multi-core platforms. We provide details on the implementation and evaluation of our system, and suggest several examples of how natural motion can be used in 3D games.
    Natural head motion is the key component in Virtual Reality systems where users are immersed via personal eyewear. The advent of webcam tracking technologies has made head motion available on every desktop, for any 3D game, calling for revised methods of handling user head motion in new, non-immersive settings. We present several novel techniques for using head motion in 3D social games. We show that head tracking is not only a useful but also an enabling technology that creates new means of interaction between players, makes gameplay more personal, and makes game controls more flexible.
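    As a hedged illustration of the head-motion techniques these three papers describe (faceAPI's real interface is not reproduced here), a game loop might map smoothed webcam head pose onto a social-game avatar roughly like this; the smoothing factor is an assumption.

class AvatarHead:
    def __init__(self):
        self.yaw = 0.0
        self.pitch = 0.0

    def apply_tracked_pose(self, raw_yaw, raw_pitch, alpha=0.3):
        """Exponential smoothing keeps noisy tracking from jittering."""
        self.yaw += alpha * (raw_yaw - self.yaw)
        self.pitch += alpha * (raw_pitch - self.pitch)

head = AvatarHead()
head.apply_tracked_pose(raw_yaw=0.2, raw_pitch=-0.1)   # radians from tracker
print(round(head.yaw, 3), round(head.pitch, 3))        # 0.06 -0.03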
    We present DRIVE, an interaction method that allows a user to manipulate virtual content by reaching behind a mobile display device such as a cellphone or tablet PC. Unlike prior work that uses the front volume as well as the front, side, and back surfaces, DRIVE utilizes the back volume of the device. Together with face tracking, our system creates the illusion that the user's hand is co-located with virtual volumetric content.
    The ability to locate, select and interact with objects is fundamental to most Virtual Reality (VR) applications. Recently, it was demonstrated that the virtual hand metaphor, a technique commonly used for these tasks, can also be employed to control the virtual camera, resulting in improved performance and user evaluation in visual search tasks. In this work, we further investigate the effects of hand-assisted viewing on user behavior in immersive virtual environments. We demonstrate that hand-assisted camera control significantly changes the way people operate their virtual hands, on motor, cognitive, and behavioral levels.
    In medical education, human patient simulators, or manikins, are a well-established method of teaching medical skills. Current state-of-the-art manikins are limited in their functions by a fixed number of built-in hardware devices, such as pressure sensors and motor actuators, that control the manikin's behaviors and responses. In this work, we review several research projects in which techniques from the fields of Augmented and Mixed Reality made it possible to significantly expand manikin functionality. We pay special attention to tactile augmentation, and describe in detail a fully functional “touch-enabled” human manikin developed at the SimTiki Medical Simulation Center, University of Hawaii. We also outline possible extensions of the proposed touch-augmented human patient simulator and share our thoughts on future directions in the use of Augmented Reality in medical education.
    Mixing real and virtual components into one consistent environment often involves creating geometry models of physical objects. Traditional approaches include manual modeling by 3D artists or the use of dedicated devices. Both approaches require special skills or special hardware and may be costly. We propose a new method for fast semi-automatic 3D geometry acquisition, based upon unconventional use of motion tracking equipment. The proposed method is intended for quick surface prototyping for Virtual, Augmented and Mixed Reality applications (VR/AR/MR), where high visual quality of the scanned objects is not required or is of low priority.
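    A minimal sketch of the acquisition idea, under assumptions the abstract does not confirm: sweep a tracked stylus over a physical surface, record its tip positions, and triangulate them into a rough mesh. This works for roughly height-field-like surfaces; the tracked-point source here is a random stand-in, not a real tracker API.

import numpy as np
from scipy.spatial import Delaunay

def mesh_from_samples(samples):
    """samples: (N, 3) tracked tip positions -> vertices and triangles."""
    samples = np.asarray(samples)
    tri = Delaunay(samples[:, :2])     # triangulate in the ground plane
    return samples, tri.simplices

points = np.random.rand(200, 3)        # stand-in for tracked stylus sweeps
vertices, faces = mesh_from_samples(points)
print(len(faces), "triangles")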
    Here we describe a vision of VR games that combine the best features of gaming and VR: large, persistent worlds experienced in photorealistic settings with full immersion. For example, Figure 1 illustrates a hypothetical immersive VR game that could be developed using current technologies, including real-time, cinematic-quality graphics; a panoramic head-mounted display (HMD); and wide-area tracking. We also examine the gap between available VR and gaming technologies, and offer solutions for bridging it.
    Two common limitations of modern Head Mounted Displays (HMD), the narrow field of view and limited dynamic range, call for rendering techniques that can circumvent or even take advantage of these limiting factors. In order to improve the visual response from HMDs, we propose a new method of creating various lighting effects by using view-dependent control over lighting. Two implemented examples are provided: simulation of a blinding effect in dark environments, and contrast enhancement. The paper is intended for an audience interested in developing HMD-based Virtual Reality applications with improved scene illumination.
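    A hedged sketch of view-dependent lighting control in the spirit of the blinding-effect example: boost a light's rendered intensity as the viewer looks straight at it. The falloff exponent and gain are illustrative assumptions, not values from the paper.

import numpy as np

def blinding_gain(view_dir, to_light_dir, power=8.0, max_gain=6.0):
    """Intensity multiplier that peaks when the view aligns with the light."""
    v = view_dir / np.linalg.norm(view_dir)
    l = to_light_dir / np.linalg.norm(to_light_dir)
    alignment = max(0.0, float(np.dot(v, l)))
    return 1.0 + (max_gain - 1.0) * alignment ** power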
    Terrain maps, commonly used for updating elevation values of a moving object (i.e., a traveler), may be conveniently used for detecting and preventing collisions between the traveler and other objects on the scene. For that purpose, we project the geometry of all collidable objects onto the map and store it in a dedicated color channel. Combined with adaptive speed control, this information provides fast and reliable collision-avoidance during travel, independent of scene complexity. We present implementation details of the base system for a Virtual Reality application and discuss a number of extensions.
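    A sketch of the map-based collision avoidance just described, with an assumed layout: collidable geometry rasterized into one channel of the terrain image, and speed scaled by a look-ahead sample so travelers slow down before reaching projected obstacles. The look-ahead distance is an illustrative choice.

import numpy as np

def adaptive_speed(obstacle_channel, pos, direction, speed, look_ahead=5):
    """Reduce speed proportionally to the nearest obstacle pixel ahead."""
    h, w = obstacle_channel.shape
    for step in range(1, look_ahead + 1):
        x, y = (pos + direction * step).astype(int)   # probe pixel ahead
        if obstacle_channel[y % h, x % w] > 0:        # flat-cost lookup
            return speed * (step - 1) / look_ahead    # slow down / stop
    return speed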
    Game engines of cinematic quality, broadband networking, and advances in Virtual Reality (VR) technologies are setting the stage to allow players to have shared, "better-than-life" experiences in online virtual worlds. We propose a mechanism of merit-based selection of players as a solution to the long-standing problem of limited access to VR hardware.
    Two common limitations of modern Head Mounted Displays (HMD): the narrow field of view and limited dynamic range, call for rendering techniques that can circumvent or even take advantage of these factors. We describe a simple practical method of enhancing visual response from HMDs by using view-dependent control over lighting. One example is provided for simulating blinding lights in dark environments.
    Though often desirable, the integration of real and virtual elements in mixed reality environments can be difficult. We propose a number of techniques to facilitate scene exploration and object selection by giving users real instruments as props while implementing their functionality in a virtual part of the environment. Specifically, we present a family of tools built upon the idea of using real binoculars for viewing virtual content. This approach matches user expectations with the tool's capabilities, enhancing the sense of presence and increasing the depth of interaction between the real and virtual components of the scene. We also discuss possible applications of these tools and the results of our user study. This paper is an extended version of earlier work presented at the 4th International Workshop on the Tangible Space Initiative [5].
    Mixing real and virtual elements into one environment often involves creating geometry models of physical objects. Traditional approaches include manual modeling by 3D artists or the use of dedicated devices. Both approaches require special skills or special hardware and may be costly. We propose a new method for fast semi-automatic 3D geometry acquisition, based upon unconventional use of motion tracking equipment. The proposed method is intended for quick surface prototyping for virtual, augmented and mixed reality applications where high visual quality of the objects is not required or is of low priority.
    Though often desirable, the integration of real and virtual elements in mixed reality environments can be difficult. We propose a number of techniques to facilitate scene exploration and object selection by giving users real instruments as props while implementing their functionality in a virtual part of the environment. Specifically, we present a family of tools built upon the idea of using real binoculars for viewing virtual content. This approach matches user expectations with the tool's capabilities, enhancing the sense of presence and increasing the depth of interaction between the real and virtual components of the scene. We also discuss possible applications of these tools and the results of our user study.
    Optical sight is a new metaphor for selecting distant objects or precisely pointing at close objects in virtual environments. Optical sight combines ray-casting, hand-based camera control, and variable zoom into one virtual instrument that can be easily implemented for a variety of Virtual, Mixed, and Augmented Reality systems. The optical sight can be modified into a wide family of tools for viewing and selecting objects. Optical sight scales well from desktop environments to fully immersive systems.
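    An illustrative sketch of the optical-sight metaphor, not the paper's implementation: cast a ray from a hand-controlled sight and let the zoom factor narrow the effective pick cone, so distant objects can be selected precisely. Representing selectable objects as center-radius spheres is an assumption for the example.

import numpy as np

def pick(origin, direction, objects, zoom=1.0, base_cone_deg=5.0):
    """Return the nearest object whose center lies inside the pick cone."""
    cone = np.radians(base_cone_deg / zoom)   # zooming in narrows the cone
    d = direction / np.linalg.norm(direction)
    best, best_t = None, np.inf
    for center, radius in objects:
        to_obj = np.asarray(center, dtype=float) - origin
        t = float(np.dot(to_obj, d))          # distance along the sight ray
        if t <= 0:
            continue
        cos = np.clip(np.dot(to_obj / np.linalg.norm(to_obj), d), -1, 1)
        if np.arccos(cos) < cone and t < best_t:
            best, best_t = (center, radius), t
    return best

objects = [((0, 0, 10), 1.0), ((3, 0, 10), 1.0)]
print(pick(np.zeros(3), np.array([0.0, 0.0, 1.0]), objects, zoom=4.0))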
    Post Traumatic Stress Disorder (PTSD) is reported to be caused by traumatic events that are outside the range of usual human experience including (but not limited to) military combat, violent personal assault, being kidnapped or taken hostage and terrorist attacks. Initial data suggests that at least 1 out of 6 Iraq War veterans are exhibiting symptoms of depression, anxiety and PTSD. Virtual Reality (VR) delivered exposure therapy for PTSD has been used with reports of positive outcomes. The aim of the current paper is to present the rationale, technical specifications, application features and user-centered design process for the development of a Virtual Iraq PTSD VR therapy application. The VR treatment environment is being created via the recycling of virtual graphic assets that were initially built for the U.S. Army-funded combat tactical simulation scenario and commercially successful X-Box game, Full Spectrum Warrior, in addition to other available and newly created assets. Thus far we have created a series of customizable virtual scenarios designed to represent relevant contexts for exposure therapy to be conducted in VR, including a city and desert road convoy environment. User-centered design feedback needed to iteratively evolve the system was gathered from returning Iraq War veterans in the USA and from a system in Iraq tested by an Army Combat Stress Control Team. Clinical trials are currently underway at Camp Pendleton and at the San Diego Naval Medical Center. Other sites are preparing to use the application for a variety of PTSD and VR research purposes.
    Post Traumatic Stress Disorder (PTSD) is reported to be caused by traumatic events that are outside the range of usual human experiences including (but not limited to) military combat, violent personal assault, being kidnapped or taken hostage and terrorist attacks. Initial data suggests that 1 out of 6 Iraq War veterans are exhibiting symptoms of depression, anxiety and PTSD. Virtual Reality (VR) exposure treatment has been used in previous treatments of PTSD patients with reports of positive outcomes. The aim of the current paper is to present the rationale, technical specifications, application features and user-centered design process for the development of a Virtual Iraq PTSD VR therapy application. The VR treatment environment is being created via the recycling of virtual graphic assets that were initially built for the U.S. Army-funded combat tactical simulation scenario and commercially successful X-Box game, Full Spectrum Warrior, in addition to other available and newly created assets. Thus far we have created a series of customizable virtual scenarios designed to represent relevant contexts for exposure therapy to be conducted in VR, including a city and desert road convoy environment. User-Centered tests with the application are currently underway at the Naval Medical Center-San Diego and within an Army Combat Stress Control Team in Iraq with clinical trials scheduled to commence in February 2006.
    Over the past four years, the FlatWorld project [1] at the University of Southern California Institute for Creative Technologies has exploited ad hoc immersive display techniques to prototype virtual reality education and training applications. While our approach is related to traditional immersive projection systems such as the CAVE [2], our work draws extensively upon techniques widely used in Hollywood sets and theme parks. Our first display system, initially prototyped in 2001, enables wide-area virtual environments in which participants can maneuver through simulated rooms, buildings, or streets. In 2004, we expanded our work by experimenting with transparent projection screens. To date, we have used this display technique for presenting life-size interactive characters with a pseudo-holographic appearance.
    Motion Picture sets are traditionally built using decorated modular wall components called "flats". The FlatWorld project (Pair et al., 2003) at the University of Southern California Institute for Creative Technologies merges this practice with immersive technology by creating a system of displays coupled with physical props which can be scaled to simulate entire buildings and streets. A single room prototype FlatWorld system was developed in 2001. The software developed for this prototype was not scalable beyond the simulation of a single room environment. In 2003, the FlatWorld Simulation Control Architecture (FSCA) was developed to support multiple digital flats in arbitrary configurations. The FSCA facilitates digital flat training scenarios which can be scaled from the simulation of a single room up to a complete city block. The architecture's flexibility allows it to easily interface with a variety of 3D graphics engines and display devices.