Ari Shapiro
University of Southern California | USC · Institute for Creative Technologies

PhD

About

77 Publications
29,255 Reads
1,694 Citations
Citations since 2017: 18 Research Items, 1,096 Citations
[Chart: citations per year, 2017-2023]
Additional affiliations
April 2010 - December 2015
Institute for Creative Technologies
Position
  • Researcher
December 2007 - March 2010
Rhythm & Hues Studios
Position
  • Graphics Scientist
November 2006 - December 2007
Industrial Light & Magic
Position
  • R & D Engineer
Education
August 2001 - June 2007
University of California, Los Angeles
Field of study
  • Computer Science
August 2001 - June 2002
University of California, Los Angeles
Field of study
  • Computer Science
September 1994 - June 1996
University of California, Santa Cruz
Field of study
  • Computer Science

Publications (77)
Conference Paper
UBeBot allows a mobile user to create a 3D avatar of themselves using a photo, as well as dress and style the avatar. Users then record their voice, allowing the avatar to act out the content of the utterance, including lip sync, facial expressions, gestures and other body language. This animated performance is generated automatically by analyzing...
Conference Paper
Full-text available
The child developmental period of ages 6-12 months marks a widely understood "critical period" for healthy language learning, during which failure to receive exposure to language can place babies at risk for language and reading problems spanning life. Deaf babies constitute one vulnerable population as they can experience dramatically reduced or...
Preprint
Full-text available
Near-range portrait photographs often contain perspective distortion artifacts that bias human perception and challenge both facial recognition and reconstruction techniques. We present the first deep learning based approach to remove such artifacts from unconstrained portraits. In contrast to the previous state-of-the-art approach, our method hand...
Conference Paper
Full-text available
Volumetric video can be used in virtual and augmented reality applications to show detailed animated performances by human actors. In this paper, we describe a volumetric capture system based on a photogrammetry cage with unsynchronized, low-cost cameras which is able to generate high-quality geometric data for animated avatars. This approach requi...
Conference Paper
Digital doppelgangers are virtual humans that highly resemble the real self but behave independently. Digital doppelgangers possess great potential to serve as powerful models for behavioral change. An emerging technology, the Rapid Avatar Capture and Simulation (RACAS) system, enables low-cost and high-speed scanning of a human user and creation o...
Conference Paper
Full-text available
Children with insufficient exposure to language during critical developmental periods in infancy are at risk for cognitive, language, and social deficits [55]. This is especially difficult for deaf infants, as more than 90% are born to hearing parents with little sign language experience [48]. We created an integrated multi-agent system involving a...
Conference Paper
A crucial aspect of virtual gaming experiences is the avatar: the player's virtual self-representation. While research has demonstrated benefits to using self-similar avatars in some virtual experiences, such avatars sometimes produce a more negative experience for women. To help researchers and game designers assess the cost-benefit tradeoffs of s...
Conference Paper
Full-text available
We present a novel algorithm for generating virtual avatars which move like the represented human subject, using inexpensive sensors & commodity hardware. Our algorithm is based on a perceptual study that evaluates self-recognition and similarity of gaits rendered on virtual avatars. We identify discriminatory features of human gait and propose a d...
Conference Paper
Full-text available
Human speech production requires the dynamic regulation of air through the vocal system. While virtual character systems are commonly capable of speech output, they rarely take breathing during speaking – speech breathing – into account. We believe that integrating dynamic speech breathing systems in virtual characters can significantly contribute...
Conference Paper
Digital doppelgangers possess great potential to serve as powerful models for behavioral change. An emerging technology, the Rapid Avatar Capture and Simulation (RACAS), enables low-cost and high-speed scanning of a human user and creation of a digital doppelganger that is a fully animatable virtual 3D model of the user. We designed a virtual role-...
Conference Paper
We demonstrate a system that can generate a photorealistic, interactive 3D character from a human subject that is capable of movement, emotion, speech and gesture in less than 20 minutes without the need for 3D artist intervention or specialized technical knowledge through a near automatic process. Our method uses mostly commodity or off-the-shelf...
Article
Current 3D capture and modeling technology can rapidly generate highly photo-realistic 3D avatars of human subjects. However, while the avatars look like their human counterparts, their movements often do not mimic the subjects' own due to existing challenges in accurate motion capture and re-targeting. A better understanding of factors that influence the...
Article
We demonstrate a system that can generate a photorealistic, interactive 3-D character from a human subject that is capable of movement, emotion, speech, and gesture in less than 20 min without the need for 3-D artist intervention or specialized technical knowledge through a near automatic process. Our method uses mostly commodity or off-the-shelf h...
Conference Paper
Full-text available
We present a novel interactive approach, PedVR, to generate plausible behaviors for a large number of virtual humans, and to enable natural interaction between the real user and virtual agents. Our formulation is based on a coupled approach that combines a 2D multi-agent navigation algorithm with 3D human motion synthesis. The coupling can result i...
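The coupled 2D-navigation / 3D-synthesis loop described here can be illustrated with a toy per-frame update; the following is a minimal runnable sketch of the idea only, with a made-up repulsion rule and speeds, not the paper's algorithm:

import numpy as np

# Toy sketch of the two-layer coupling: a 2D navigation step proposes a
# velocity toward the goal with crude agent-agent repulsion, and a
# "motion synthesis" step clamps it to what the walk clips can deliver
# before moving the character root.
def nav_velocity(pos, goal, others, max_speed=1.4):
    v = goal - pos
    v = v / (np.linalg.norm(v) + 1e-9) * max_speed
    for other in others:
        d = pos - other
        if np.linalg.norm(d) < 1.0:   # too close: push away
            v += d
    return v

def motion_step(pos, v, dt, stride_speed=1.2):
    speed = min(np.linalg.norm(v), stride_speed)  # clip-library limit
    direction = v / (np.linalg.norm(v) + 1e-9)
    return pos + direction * speed * dt           # new root position

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
for _ in range(10):
    pos = motion_step(pos, nav_velocity(pos, goal, [np.array([2.0, 0.3])]), 0.1)
print(pos)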
Conference Paper
Recent advances in scanning technology have enabled the widespread capture of 3D character models based on human subjects. Intuition suggests that, with these new capabilities to create avatars that look like their users, every player should have his or her own avatar to play video games or simulations. We explicitly test the impact of having one’s...
Conference Paper
This study explores presentation techniques for a chat-based virtual human that communicates engagingly with users. Interactions with the virtual human occur via a smartphone outside of the lab in natural settings. Our work compares the responses of users who interact with an animated virtual character as opposed to a real human video character cap...
Conference Paper
Full-text available
This study investigates presentation techniques for a chat-based virtual human that communicates engagingly with users via a smartphone outside of the lab in natural settings. Our work compares the responses of users who interact with an animated 3D virtual character as opposed to a real human video character capable of displaying backchannel behavi...
Conference Paper
Recent advances in scanning technology have enabled the widespread capture of 3D character models based on human subjects. Intuition suggests that, with these new capabilities to create avatars that look like their users, every player should have his or her own avatar to play videogames or simulations. We explicitly test the impact of having one’s...
Conference Paper
Virtual audiences are used for training public speaking and mitigating anxiety related to it. However, research has been scarce on studying how virtual audiences are perceived and which non-verbal behaviors should be used to make such an audience appear in particular states, such as boredom or engagement. Recently, crowdsourcing methods have been p...
Conference Paper
Full-text available
Creating and animating realistic 3D human faces is an important element of virtual reality, video games, and other areas that involve interactive 3D graphics. In this paper, we propose a system to generate photorealistic 3D blendshape-based face models automatically using only a single consumer RGB-D sensor. The capture and processing requires no a...
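The blendshape-based representation named in this abstract is the standard linear face model; as a minimal sketch (array shapes and values below are illustrative, not the paper's data):

import numpy as np

def evaluate_blendshapes(neutral, deltas, weights):
    """Linear blendshape model: the animated face equals the neutral
    mesh plus a weighted sum of per-expression vertex offsets.
    neutral : (V, 3) rest-pose vertex positions
    deltas  : (K, V, 3) offsets, deltas[k] = expression k minus neutral
    weights : (K,) blend weights, typically in [0, 1]
    """
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy usage: 4 vertices, 2 expressions, a half-strength first expression.
neutral = np.zeros((4, 3))
deltas = np.random.default_rng(0).normal(size=(2, 4, 3)) * 0.01
face = evaluate_blendshapes(neutral, deltas, np.array([0.5, 0.0]))
print(face.shape)  # (4, 3)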
Conference Paper
Full-text available
Good public speaking skills convey strong and effective communication, which is critical in many professions and used in everyday life. The ability to speak publicly requires a lot of training and practice. Recent technological developments enable new approaches for public speaking training that allow users to practice in a safe and engaging enviro...
Conference Paper
Full-text available
In this work we present a complete methodology for robust authoring of AR virtual characters powered by a versatile character animation framework (SmartBody), using only mobile devices. We can author and fully augment any open space with life-size, animated, geometrically accurately registered virtual characters in less than 1 minute with only...
Conference Paper
In this study, we are interested in exploring whether people would talk with 3D animated virtual humans using a smartphone for a longer amount of time as a sign of feeling rapport [5], compared to non-animated or audio-only characters in everyday life. Based on previous studies [2, 7, 10], users prefer animated characters in emotionally engaged int...
Conference Paper
Full-text available
We describe an authoring framework for developing virtual humans on mobile applications. The framework abstracts many elements needed for virtual human generation and interaction, such as the rapid development of nonverbal behavior, lip syncing to speech, dialogue management, access to speech transcription services, and access to mobile sensors suc...
Conference Paper
This project puts the capability of producing a photorealistic face into the hands of nearly anyone, without an expensive rig, special hardware, or 3D expertise. Using a single commodity depth sensor (Intel RealSense) and a laptop computer, the research team captures several scans of a single face with different expressions. From those scans, a nea...
Conference Paper
Full-text available
Creating and animating a realistic 3D human face is an important task in computer graphics. The capability of capturing the 3D face of a human subject and reanimate it quickly will find many applications in games, training simulations, and interactive 3D graphics. We demonstrate a system to capture photorealistic 3D faces and generate the blendshap...
Conference Paper
Full-text available
[Figure 1: Our avatar generation system allows the user to reshape and resize an input body scan according to human proportions. The reshaped scans are then automatically rigged into skinned virtual characters, which can be animated in an interactive virtual environment.]
3D scans of human figures have become widely available through online...
Conference Paper
Full-text available
Creating and animating a realistic 3D human face has been an important task in computer graphics. The capability of capturing the 3D face of a human subject and reanimate it quickly will find many applications in games, training simulations, and interactive 3D graphics. In this paper, we propose a system to capture photorealistic 3D faces and gener...
Article
Full-text available
This study explores presentation techniques for a 3D animated chat-based virtual human that communicates engagingly with users. Interactions with the virtual human occur via a smartphone outside of the lab in natural settings. Our work compares the responses of users who interact with no image or a static image of a virtual character as opposed to...
Conference Paper
Recent advances in scanning technology have enabled the widespread capture of 3D character models based on human subjects. However, in order to generate a recognizable 3D avatar, the movement and behavior of the human subject should be captured and replicated as well. We present a method of generating a 3D model from a scan, as well as a method to...
Conference Paper
We present preliminary results of a framework that can synthesize parameterized locomotion with controllable quality from simple deformations over a single step cycle. Our approach enforces feet constraints per phase in order to appropriately perform motion deformation operations, resulting in a generative and controllable model that maintains the...
Article
This demo captures a 3D model of a human subject using a single Microsoft Kinect, constructs a realistic 3D model from that subject, then animates and simulates it with a variety of different behaviors, all within a matter of minutes.
Conference Paper
Full-text available
Cloth manipulation is a common action in humans that many animated characters in interactive simulations are not able to perform due to its complexity. In this paper we focus on dressing-up, a common action involving cloth. We identify the steps required to perform the task and describe the systems responsible for each of them. Our results show a c...
Article
We demonstrate a method of acquiring a 3D model of a human using commodity scanning hardware and then controlling that 3D figure in a simulated environment in only a few minutes. The model acquisition requires four static poses taken at 90° angles relative to each other. The 3D model is then given a skeleton and smooth binding information necessary...
Conference Paper
Full-text available
The USC Institute for Creative Technologies will demonstrate a pipeline for automatic reconstruction and animation of lifelike 3D avatars acquired by rotating the user's body in front of a single Microsoft Kinect sensor. Based on a fusion of state-of-the-art techniques in computer vision, graphics, and animation, this approach can produce a fully ri...
Conference Paper
We demonstrate Ally -- a prototype interface for a consumer-level medical diagnostic device. It is an interactive virtual character, a Virtual Human (VH), that listens to the user's concerns, collects and processes sensor data, offers advice, guides the user through self-administered medical tests, and answers the user's questions. The primary foc...
Article
Humanoid three-dimensional (3D) models can be easily acquired through various sources, including through online marketplaces. The use of such models within a game or simulation environment requires human input and intervention in order to associate such a model with a relevant set of motions and control mechanisms. In this paper, we demonstrate a p...
Article
Full-text available
We have developed an interactive virtual audience platform for public speaking training. Users' public speaking behavior is automatically analyzed using audiovisual sensors. The virtual characters display indirect feedback depending on user's behavior descriptors correlated with public speaking performance. We used the system to collect a dataset o...
Conference Paper
Full-text available
We demonstrate a lip animation (lip sync) algorithm for real-time applications that can be used to generate synchronized facial movements with audio generated from natural speech or a text-to-speech engine. Our method requires an animator to construct animations using a canonical set of visemes for all pairwise combinations of a reduced phoneme set...
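As a rough illustration of the pairwise-viseme scheme this abstract describes, one can stitch animator-authored curves per phoneme pair into a track; the reduced phoneme set, curve format, and timings below are all hypothetical:

# Hypothetical sketch: concatenate authored diphone (phoneme-pair)
# viseme curves into a lip-sync track. Curves here are simply lists of
# (time_offset, viseme_weight) keys.
REDUCED_PHONEMES = ["A", "E", "O", "M", "F", "sil"]

# One authored curve per ordered phoneme pair (built by an animator;
# stand-in values here).
diphone_curves = {
    (a, b): [(0.0, 0.0), (0.05, 1.0), (0.1, 0.0)]
    for a in REDUCED_PHONEMES for b in REDUCED_PHONEMES
}

def lip_sync_track(timed_phonemes):
    """timed_phonemes: list of (phoneme, start_time) pairs from natural
    speech alignment or a text-to-speech engine. Returns (time, weight)
    viseme keys for the whole utterance."""
    track = []
    for (p0, t0), (p1, _) in zip(timed_phonemes, timed_phonemes[1:]):
        for offset, weight in diphone_curves[(p0, p1)]:
            track.append((t0 + offset, weight))
    return track

print(lip_sync_track([("sil", 0.0), ("M", 0.1), ("A", 0.2)])[:3])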
Conference Paper
Full-text available
Public speaking performances are not only characterized by the presentation of the content, but also by the presenters’ nonverbal behavior, such as gestures, tone of voice, vocal variety, and facial expressions. Within this work, we seek to identify automatic nonverbal behavior descriptors that correlate with expert-assessments of behaviors charact...
Conference Paper
Full-text available
While virtual humans are proven tools for training, education and research, they are far from realizing their full potential. Advances are needed in individual capabilities, such as character animation and speech synthesis, but perhaps more importantly, fundamental questions remain as to how best to integrate these capabilities into a single framew...
Article
Full-text available
We demonstrate a method for generating a 3D virtual character performance from the audio signal by inferring the acoustic and semantic properties of the utterance. Through a prosodic analysis of the acoustic signal, we perform an analysis for stress and pitch, relate it to the spoken words and identify the agitation state. Our rule-based syst...
Conference Paper
Full-text available
Previsualization tools are used to obtain a preliminary but rough version of a film, television or other production. Used for both live-action and animated films, they allow a director to set up camera angles, arrange scenes, dialogue, and other scene elements without the expense of paying live actors, constructing physical sets, or other related pr...
Conference Paper
Full-text available
We demonstrate a method for generating a 3D virtual character performance from the audio signal by inferring the acoustic and semantic properties of the utterance. Through a prosodic analysis of the acoustic signal, we perform an analysis for stress and pitch, relate it to the spoken words and identify the agitation state. Our rule-based system per...
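A toy sketch of such a rule-based mapping from prosody to behavior, where the per-word feature format and the specific rules are invented for illustration:

# Hypothetical rules in the spirit of the abstract: stress, pitch, and
# agitation cues trigger nonverbal behaviors on specific words.
def select_behaviors(words):
    """words: list of dicts with per-word acoustic features, e.g.
    {"text": "really", "stressed": True, "pitch": "rising",
     "agitated": False}. Returns (word, behavior) pairs."""
    behaviors = []
    for w in words:
        if w["stressed"]:
            behaviors.append((w["text"], "beat_gesture"))    # emphasis
        if w["pitch"] == "rising":
            behaviors.append((w["text"], "eyebrow_raise"))   # question-like
        if w["agitated"]:
            behaviors.append((w["text"], "larger_faster_gestures"))
    return behaviors

print(select_behaviors([{"text": "really", "stressed": True,
                         "pitch": "rising", "agitated": False}]))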
Conference Paper
Synchronizing the lip and mouth movements naturally along with animation is an important part of convincing 3D character performance. We present a simple, portable and editable lip-synchronization method that works for multiple languages, requires no machine learning, can be constructed by a skilled animator, runs in real time, and can be personali...
Conference Paper
Full-text available
Motion blending is a widely used technique for character animation. The main idea is to blend similar motion examples according to blending weights, in order to synthesize new motions parameterizing high level characteristics of interest. We present in this paper an in-depth analysis and comparison of four motion blending techniques: Barycentric in...
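Barycentric interpolation, the first technique named here, solves for weights that both reproduce a query point in the examples' parameter space and sum to one; a minimal sketch under those assumptions (parameter values are made up):

import numpy as np

def barycentric_weights(p, example_params):
    """Solve sum_i w_i * x_i = p with sum_i w_i = 1, where x_i are the
    parameter points (e.g. walk speed, turn rate) of the motion
    examples. Assumes len(p) + 1 examples spanning a simplex."""
    X = np.vstack([np.asarray(example_params, float).T,
                   np.ones(len(example_params))])
    return np.linalg.solve(X, np.append(p, 1.0))

def blend_poses(poses, weights):
    """Weighted sum of example poses (frames x DOFs arrays); a real
    system would blend joint rotations on the quaternion sphere."""
    return sum(w * q for w, q in zip(weights, poses))

# Toy usage: three walk cycles parameterized by (speed, turn rate).
examples = [(1.0, 0.0), (2.0, 0.0), (1.5, 0.5)]
w = barycentric_weights(np.array([1.5, 0.2]), examples)
print(w, w.sum())  # weights reproduce the query and sum to 1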
Conference Paper
Full-text available
Humanoid 3D models can be easily acquired through various sources, including online. The use of such models within a game or simulation environment requires human input and intervention in order to associate such a model with a relevant set of motions and control mechanisms. In this paper, we demonstrate a pipeline where humanoid 3D models can be i...
Article
Full-text available
We synthesize natural-looking locomotion, reaching and grasping for a virtual character in order to accomplish a wide range of movement and manipulation tasks in real time. Our virtual characters can move while avoiding obstacles, as well as manipulate arbitrarily shaped objects, regardless of height, location or placement in a virtual environment....
Conference Paper
Full-text available
We describe a system for animating virtual characters that encompasses many important aspects of character modeling for simulations and games. These include locomotion, facial animation, speech synthesis, reaching/grabbing, and various automated non-verbal behaviors, such as nodding, gesturing and eye saccades. Our system implements aspects of char...
Conference Paper
BML realizers are complex software modules that implement a standardized interface -- the BML specification language -- to steer the behavior of a virtual human. We aim to promote and test the compliance of realizers that implement this interface. To this end we contribute a corpus of example BML scripts and a tool called RealizerTester that can be used...
Article
Full-text available
To significantly improve the visual quality of certain types of animated 3D character motion, a proposed graphics system infers physical properties and corrects the results by using dynamics. The system visualizes these physical characteristics and provides information not normally available to traditional 3D animators, such as the center of mass,...
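One of the quantities mentioned, the center of mass, follows directly from per-link masses and positions; a small illustrative computation with made-up values:

import numpy as np

def center_of_mass(link_masses, link_com_positions):
    """Whole-body center of mass of an articulated character: the
    mass-weighted average of the per-link centers of mass, which a
    system like the one described can plot over the animated motion."""
    m = np.asarray(link_masses, float)
    p = np.asarray(link_com_positions, float)   # (L, 3), world space
    return (m[:, None] * p).sum(axis=0) / m.sum()

# Toy two-link example.
print(center_of_mass([10.0, 5.0], [[0, 1, 0], [0, 2, 0]]))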
Conference Paper
Full-text available
Physical realism is an important aspect of producing convincing 3D character animation. This is particularly true for live-action visual effects where animated characters occupy the same scene as the live actors. In such a scenario, a virtual character's movements must visually match the behavior and movements of the live environment, else the disc...
Article
We synchronize cameras and analog lighting with high-speed projectors. Radiometric compensation allows displaying flexible blue screens on arbitrary real world surfaces. A fast temporal multiplexing of coded projection and flash illumination enables ...
Article
Full-text available
We introduce a toolkit for creating dynamic controllers for articulated characters under physical simulation. Our toolkit allows users to create dynamic controllers for interactive or offline use through a combination of both visual and scripting tools. Users can design controllers by specifying keyframe poses, using a high-level scripting language...
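Keyframe-pose controllers of this kind are typically built on proportional-derivative joint tracking; a minimal sketch of that standard building block, with gains and values that are illustrative rather than taken from the toolkit:

def pd_torques(q, qdot, q_target, kp=300.0, kd=30.0):
    """Proportional-derivative pose tracking: drive each joint toward
    its keyframe angle, damped by the joint's current velocity.
    q, qdot, q_target: per-joint angles and velocities (radians)."""
    return [kp * (qt - qi) - kd * vi
            for qi, vi, qt in zip(q, qdot, q_target)]

# Torques for one simulation step of a 3-joint chain (toy values).
print(pd_torques(q=[0.0, 0.1, -0.2], qdot=[0.0, 0.0, 0.0],
                 q_target=[0.1, 0.0, 0.0]))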
Conference Paper
Editing recorded motions to make them suitable for different sets of environmental constraints is a general and difficult open problem. In this paper we solve a significant part of this problem by modifying full-body motions with an interactive randomized motion planner. Our method is able to synthesize collision-free motions for specified linkages of...
Conference Paper
Full-text available
Dynamic simulation is a promising complement to kinematic motion synthesis, particularly in cases where simulated characters need to respond to unpredictable interactions. Moving beyond simple rag-doll effects, though, requires dynamic control. The main issue with dynamic control is that there are no standardized techniques that allow an animat...
Conference Paper
Full-text available
We propose a novel method for interactive editing of motion data based on motion decomposition. Our method employs Independent Component Analysis (ICA) to separate motion data into visually meaningful components called style components. The user then interactively identifies suitable style components and manipulates them based on a proposed...
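The decompose, edit, and reconstruct loop described here maps naturally onto an off-the-shelf ICA routine; a minimal sketch using scikit-learn's FastICA on stand-in data (component count and scaling are arbitrary, not the paper's pipeline):

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
motion = rng.normal(size=(240, 60))      # 240 frames x 60 DOFs, toy data

ica = FastICA(n_components=8, random_state=0)
components = ica.fit_transform(motion)   # (240, 8) "style" signals

components[:, 2] *= 1.5                  # exaggerate one style component
edited_motion = ica.inverse_transform(components)
print(edited_motion.shape)               # back to (240, 60)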
Article
Editing recorded motions to make them suitable for different sets of environmental constraints is a general and difficult open problem. In this paper we solve a significant part of this problem by modifying full-body motions with an interactive randomized motion planner. Our method is able to plan motions for specified linkages of an animated cha...
Conference Paper
Full-text available
We introduce the Dynamic Animation and Control Environment (DANCE) as a publicly available simulation platform for research and teaching. DANCE is an open and extensible simulation framework and rapid prototyping environment for computer animation. The main focus of the DANCE platform is the development of physically-based controllers for articulat...
Article
Full-text available
We provide an interface for controlling humanoid characters under physical simulation. This interface permits a user to control the character at a variety of different semantic levels. The character can be instructed at a high level, such as by issuing a walk command, or at a low level, such as by specifying poses with interactive keyframe selecti...
Conference Paper
Ecce Homology, a physically interactive new-media work, visualizes genetic data as calligraphic forms. A novel computer-vision-based user interface allows multiple participants, through their movement in the installation space, to select genes ...
Conference Paper
Full-text available
Our system can switch between kinematic animation and physical simulation for any number of characters in the environment and determines which effects will be placed on which characters. For example, it may be desirable to fully animate one character hitting another using motion captured data and only simulate the impact of the force of the hit on th...