Conference Paper
Full-text available

A Multi-resolution Approach for Adapting Close Character Interaction

Abstract

Synthesizing close interactions such as dancing and fighting between characters is a challenging problem in computer animation. While encouraging results are presented in [Ho et al. 2010], the high computation cost makes the method unsuitable for interactive motion editing and synthesis. In this paper, we propose an efficient multiresolution approach in the temporal domain for editing and adapting close character interactions based on the Interaction Mesh framework. In particular, we divide the original large spacetime optimization problem into multiple smaller problems such that the user can observe the adapted motion while playing back the movements during run-time. Our approach is highly parallelizable, and achieves high performance by making use of multi-core architectures. The method can be applied to a wide range of applications including motion editing systems for animators and motion retargeting systems for humanoid robots.
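The temporal decomposition the abstract describes can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: `solve_window` is a hypothetical per-window solver (here just a smoothing pass in place of the real Interaction Mesh optimization), and the windowed parallel map mirrors the divide-and-conquer, multi-core strategy the paper relies on.

```python
# Sketch: split one large space-time problem into small temporal windows
# and solve them in parallel (stand-in for the paper's multiresolution scheme).
from concurrent.futures import ThreadPoolExecutor

def solve_window(frames):
    # Placeholder per-window "optimization": average each frame with its
    # neighbours inside the window.  The real solver would adapt the
    # Interaction Mesh here.
    out = []
    for i, f in enumerate(frames):
        prev = frames[max(i - 1, 0)]
        nxt = frames[min(i + 1, len(frames) - 1)]
        out.append((prev + f + nxt) / 3.0)
    return out

def multires_solve(trajectory, window=8):
    # Divide the trajectory into independent windows and solve them in
    # parallel, so adapted frames become available during playback.
    chunks = [trajectory[i:i + window] for i in range(0, len(trajectory), window)]
    with ThreadPoolExecutor() as ex:
        solved = list(ex.map(solve_window, chunks))
    return [f for chunk in solved for f in chunk]
```

Because each window only touches its own frames, the windows can be dispatched to separate cores, which is the source of the speed-up the abstract claims over one monolithic spacetime solve.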
... New motions can be synthesized by connecting boundaries of multiple patches, thus creating multiple interactions that were not performed together in the original data. Methods for adapting existing interactions to new environments and characters have also been studied [Al-Asqhar et al. 2013; Ho et al. 2010, 2014; Jin et al. 2018; Kim et al. 2009, 2014, 2021]. The key idea is to define an interaction descriptor that encodes the spatial and temporal relationship, then to edit the motions while minimizing the semantic difference between the original motion and the edited motions, where the difference is measured by the descriptor. ...
Preprint
Full-text available
We present a method for reproducing complex multi-character interactions for physically simulated humanoid characters using deep reinforcement learning. Our method learns control policies for characters that imitate not only individual motions, but also the interactions between characters, while maintaining balance and matching the complexity of reference data. Our approach uses a novel reward formulation based on an interaction graph that measures distances between pairs of interaction landmarks. This reward encourages control policies to efficiently imitate the character's motion while preserving the spatial relationships of the interactions in the reference motion. We evaluate our method on a variety of activities, from simple interactions such as a high-five greeting to more complex interactions such as gymnastic exercises, Salsa dancing, and box carrying and throwing. This approach can be used to ``clean-up'' existing motion capture data to produce physically plausible interactions or to retarget motion to new characters with different sizes, kinematics or morphologies while maintaining the interactions in the original data.
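The interaction-graph reward this abstract describes, distances between pairs of interaction landmarks compared against the reference, can be sketched as below. This is a simplified reading, not the paper's exact formulation: the edge list, the squared-error aggregation, and the exponential shaping are illustrative assumptions.

```python
# Sketch of a reward that preserves pairwise landmark distances from a
# reference interaction (hypothetical shaping, not the paper's exact reward).
import math

def interaction_reward(ref_landmarks, sim_landmarks, edges, scale=1.0):
    # edges: pairs (i, j) of landmark indices forming the interaction graph.
    err = 0.0
    for i, j in edges:
        d_ref = math.dist(ref_landmarks[i], ref_landmarks[j])
        d_sim = math.dist(sim_landmarks[i], sim_landmarks[j])
        err += (d_ref - d_sim) ** 2
    # Exponential shaping keeps the reward in (0, 1], maximal when the
    # simulated distances match the reference exactly.
    return math.exp(-scale * err)
```

A policy maximizing this term is pushed to reproduce the spatial relationships of the reference interaction rather than just each character's individual pose.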
... These methods aim at transferring the topology of the body segments of the source motion to the target character, while using generalized inverse kinematics to solve all the corresponding constraints. This idea of modeling the topology between body segments has been extended by introducing an interaction mesh [Ho et al. 2010, 2014]. Recent works introduced the concept of egocentric plane to ensure that the instantaneous separating plane between each pair of body parts is transferred between the source and target motion [Molla et al. 2018]. ...
Conference Paper
Retargeting a motion from a source to a target character is an important problem in computer animation, as it allows reusing existing rigged databases or transferring motion capture to virtual characters. Surface-based pose transfer is a promising approach to avoid the trial-and-error process when controlling the joint angles. The main contribution of this paper is to investigate whether shape transfer instead of pose transfer would better preserve the original contextual meaning of the source pose. To this end, we propose an optimization-based method to deform the source shape+pose using three main energy functions: similarity to the target shape, body part volume preservation, and collision management (preserve existing contacts and prevent penetrations). The results show that our method is able to retarget complex poses, including several contacts, to very different morphologies. In particular, we introduce new contacts that are linked to the change in morphology, and which would be difficult to obtain with previous works based on pose transfer that aim at distance preservation between body parts. These preliminary results are encouraging and open several perspectives, such as decreasing computation time, and better understanding how to model pose and shape constraints.
... Recent research efforts have also proposed attractive methods based on topological coordinates for the (offline) retargeting of complex multi-agents interactions. However these methods are either costly [18] or not suited for online performance animation [19]. Likewise, techniques allowing the deformation transfer between surface meshes are also still far from realtime [20], [21]. ...
Article
Full-text available
The relative location of human body parts often materializes the semantics of on-going actions, intentions and even emotions expressed, or performed, by a human being. However, traditional methods of performance animation fail to correctly and automatically map the semantics of performer postures involving self-body contacts onto characters with different sizes and proportions. Our method proposes an egocentric normalization of the body-part relative distances to preserve the consistency of self contacts for a large variety of human-like target characters. Egocentric coordinates are character independent and encode the whole posture space, i.e., they ensure the continuity of the motion with and without self-contacts. We can transfer classes of complex postures involving multiple interacting limb segments by preserving their spatial order without depending on temporal coherence. The mapping process exploits a low-cost constraint relaxation technique relying on analytic inverse kinematics; thus, we can achieve online performance animation. We demonstrate our approach on a variety of characters and compare it with the state of the art in online retargeting with a user study. Overall, our method performs better than the state of the art, especially when the proportions of the animated character deviate from those of the performer.
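The character-independent egocentric normalization described above can be sketched as follows. This is a deliberately simplified reading: pairwise joint offsets are divided by a character-specific scale (e.g. arm span) so the code transfers to a character of a different size, whereas the actual paper uses richer egocentric coordinates and analytic IK.

```python
# Sketch: normalize body-part relative offsets by a per-character scale so
# self-contact relationships transfer across character sizes (simplified
# stand-in for the paper's egocentric coordinates).

def egocentric_encode(joints, scale):
    # joints: {joint_id: (x, y, z)}; scale: character size (assumption).
    # Pairwise offsets in normalized, character-independent units.
    return {(i, j): tuple((b - a) / scale for a, b in zip(joints[i], joints[j]))
            for i in joints for j in joints if i != j}

def egocentric_decode(code, root, scale):
    # Rebuild positions on a target character of a different scale,
    # anchored at a chosen root joint (only root-relative offsets used here).
    joints = {root: (0.0, 0.0, 0.0)}
    for (i, j), off in code.items():
        if i == root:
            joints[j] = tuple(scale * o for o in off)
    return joints
```

Because the encoding stores proportions rather than absolute distances, a contact that touches on the performer also touches on a larger or smaller target character.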
... Finally, position constraints should prevent body overlap and other implausible configurations. Researchers have proposed topology coordinates [14,15] and interaction meshes [13,16] to solve these problems in close interaction [6,17,47]. In our work, the proposed model mainly addresses the stability and human dynamics of the generated animation, as well as adaptability to various interactions. ...
Article
Full-text available
In this paper, we propose a generative recurrent model for human-character interaction. Our model is an encoder-recurrent-decoder network. The recurrent network is composed of multiple layers of long short-term memory (LSTM) and is flanked by an encoder network and a decoder network before and after the recurrent network. With the proposed model, the virtual character’s animation is generated on the fly while it interacts with the human player. The upcoming animation of the character is automatically generated based on the history motion data of both itself and its opponent. We evaluated our model on both public motion capture databases and our own recorded motion data. Experimental results demonstrate that the LSTM layers can help the character learn a long history of human dynamics to animate itself. In addition, the encoder–decoder networks can significantly improve the stability of the generated animation. This method can automatically animate a virtual character responding to a human player.
Article
Full-text available
Creating realistic characters that can react to the users’ or another character’s movement can greatly benefit computer graphics, games and virtual reality. However, synthesizing such reactive motions in human-human interactions is a challenging task due to the many different ways two humans can interact. While there have been a number of successful studies on adapting the generative adversarial network (GAN) to synthesizing single human actions, there are very few on modelling human-human interactions. In this paper, we propose a semi-supervised GAN system that synthesizes the reactive motion of a character given the active motion from another character. Our key insights are two-fold. First, to effectively encode the complicated spatial–temporal information of a human motion, we empower the generator with a part-based long short-term memory (LSTM) module, such that the temporal movement of different limbs can be effectively modelled. We further include an attention module such that the temporal significance of the interaction can be learned, which enhances the temporal alignment of the active-reactive motion pair. Second, as the reactive motion of different types of interactions can be significantly different, we introduce a discriminator that not only tells if the generated movement is realistic or not, but also tells the class label of the interaction. This allows the use of such labels in supervising the training of the generator. We experiment with the SBU and the HHOI datasets. The high quality of the synthetic motion demonstrates the effective design of our generator, and the discriminability of the synthesis also demonstrates the strength of our discriminator.
Article
We present an automatic method that retargets poses from a source to a target character by transferring the shape of the target character onto the desired pose of the source character. By considering shape instead of pose transfer, our method better preserves the contextual meaning of the source pose, typically contacts between body parts, than pose-based strategies do. To this end, we propose an optimization-based method to deform the source shape in the desired pose using three main energy functions: similarity to the target shape, body part volume preservation, and collision management to preserve existing contacts and prevent penetrations. The results show that our method can retarget complex poses with several contacts to different morphologies, and is even able to create new contacts when morphology changes require them, such as increases in body size. To demonstrate the robustness of our approach to different types of shapes, we successfully apply it to basic and dressed human characters as well as wild animal models, without the need to adjust parameters.
Chapter
Unlike single-character motion retargeting, multi-character motion retargeting (MCMR) algorithms should be able to retarget each character’s motion correctly while maintaining the interaction between them. Existing MCMR solutions mainly focus on small scale changes between interacting characters. However, many retargeting applications require large-scale transformations. In this paper, we propose a new algorithm for large-scale MCMR. We build on the idea of interaction meshes, which are structures representing the spatial relationship among characters. We introduce a new distance-based interaction mesh that embodies the relationship between characters more accurately by prioritizing local connections over global ones. We also introduce a stiffness weight for each skeletal joint in our mesh deformation term, which defines how undesirable it is for the interaction mesh to deform around that joint. This parameter increases the adaptability of our algorithm for large-scale transformations and reduces optimization time considerably. We compare the performance of our algorithm with the current state-of-the-art MCMR solution for several motion sequences under four different scenarios. Our results show that our method not only improves the quality of retargeting, but also significantly reduces computation time.
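A stiffness-weighted deformation term of the kind this abstract mentions can be sketched as below. This is our reading of the idea, not the paper's exact energy: each edge of the interaction mesh is penalized for changing its rest-pose edge vector, scaled by the mean stiffness of its two endpoint joints, so high-stiffness joints resist deformation more.

```python
# Sketch: stiffness-weighted mesh deformation energy (illustrative form,
# not the paper's exact term).

def deformation_energy(rest, deformed, edges, stiffness):
    # rest/deformed: {joint_id: position tuple}; edges: (i, j) pairs of the
    # interaction mesh; stiffness: {joint_id: weight}, higher = stiffer.
    e = 0.0
    for i, j in edges:
        rest_vec = [b - a for a, b in zip(rest[i], rest[j])]
        def_vec = [b - a for a, b in zip(deformed[i], deformed[j])]
        # Average stiffness of the two endpoints scales the penalty for
        # deforming the mesh around those joints.
        w = 0.5 * (stiffness[i] + stiffness[j])
        e += w * sum((d - r) ** 2 for r, d in zip(rest_vec, def_vec))
    return e
```

Minimizing this energy alongside retargeting constraints keeps the interaction mesh rigid near stiff joints while letting it flex elsewhere, which is what makes large-scale transformations tractable.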
Conference Paper
In this paper, we present a novel inverse kinematics (IK) method based on the as-rigid-as-possible (ARAP) mesh deformation technique. We can interactively and efficiently edit the pose of a character by deforming the mesh geometry embedded into the character model. Complex spatial relationships between the joint locations of the character model are formulated as linear constraints based on the mesh structure to improve the computational efficiency of the IK algorithm compared to previous numerical IK methods. After deforming the mesh obtained from a character pose, we apply a hybrid IK algorithm to closely match the deformed mesh while limiting all joint angles within a reasonable range. We experimentally show that the proposed method works well while retaining the user-defined constraints.
Article
Interaction meshes are a promising approach for generating natural behaviors of virtual characters during ongoing user interactions. In this paper, we propose several extensions to the interaction mesh approach based on statistical analyses of the underlying example interactions. By applying principal component analysis and correlation analysis in addition to joint distance calculations, both the interaction mesh topology and the constraints used for mesh optimization can be generated in an automated fashion that accounts for the spatial and temporal contexts of the interaction.
Conference Paper
Full-text available
This paper presents a new method to synthesize full body motion for controlling humanoid robots in highly constrained environments. Given a reference motion of the robot and the corresponding environment configuration, the spatial relationships between the robot body parts and the environment objects are extracted as a representation called the Interaction Mesh. Such a representation is then used in adapting the reference motion to an altered environment. By preserving the spatial relationships while satisfying physical constraints, collision-free and well balanced motions can be generated automatically and efficiently. Experimental results show that the proposed method can adapt different full body motions to significantly modified environments. Our method can be applied to precise robotic control in complicated environments, such as rescue robots in accident scenes and search robots in highly constrained spaces.
Conference Paper
Full-text available
This paper proposes a method for key-frame selection of captured motion data. In many cases, it is desirable to obtain a compact representation of the human motion. Key-framing is often used to express CG animation with a set of frames. In general, the animation is described by a set of curves that give the value of the rotation of all joints in each frame. Our method automatically detects the key-frames in captured motion data by using frame decimation. We decimate less important frames one by one, and then rank them by their importance. Our method has an embedded property, that is, all the frames are ranked by their importance, and thus users can specify any number of key-frames from one data set. We demonstrate the validity of our method in the experimental section on several typical motions such as walking and throwing.
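The decimation-and-ranking idea above can be sketched on a single scalar joint curve. This is an illustrative simplification, assuming linear interpolation error as the importance measure; the paper works on full joint-rotation curves.

```python
# Sketch: rank frames by importance via iterative decimation (scalar curve,
# linear-interpolation error as the importance measure; both assumptions).

def rank_keyframes(values):
    # values[t]: one pose value per frame.  Repeatedly remove the interior
    # frame that is best predicted by interpolating its surviving
    # neighbours; frames removed later are more important.
    frames = list(range(len(values)))
    order = []
    while len(frames) > 2:
        best_i, best_err = None, None
        for k in range(1, len(frames) - 1):
            a, b, c = frames[k - 1], frames[k], frames[k + 1]
            t = (b - a) / (c - a)
            interp = values[a] + t * (values[c] - values[a])
            err = abs(values[b] - interp)
            if best_err is None or err < best_err:
                best_i, best_err = k, err
        order.append(frames.pop(best_i))
    # Endpoints are always kept; reversing the removal order ranks the
    # remaining frames from most to least important.
    return frames + order[::-1]
```

Because the output is a full ranking, a user can take any prefix of it as a key-frame set, which is the "embedded property" the abstract refers to.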
Conference Paper
Full-text available
We present a method to decompose an arbitrary 3D piecewise linear complex (PLC) into a constrained Delaunay tetrahedralization (CDT). It successfully resolves the problem of the non-existence of a CDT by updating the input PLC into another PLC that is topologically and geometrically equivalent to the original one and does have a CDT. Based on a strong CDT existence condition, the redefinition is done by segment splitting and vertex perturbation. Once the CDT exists, a practically fast cavity retetrahedralization algorithm recovers the missing facets. This method has been implemented and tested on various examples. In practice, it behaves robustly and efficiently for relatively complicated 3D domains.
Article
This article presents a new framework for synthesizing motion of a virtual character in response to the actions performed by a user-controlled character in real time. In particular, the proposed method can handle scenes in which the characters are closely interacting with each other such as those in partner dancing and fighting. In such interactions, coordinating the virtual characters with the human player automatically is extremely difficult because the system has to predict the intention of the player character. In addition, the style variations from different users affect the accuracy in recognizing the movements of the player character when determining the responses of the virtual character. To solve these problems, our framework makes use of the spatial relationship-based representation of the body parts called interaction mesh, which has been proven effective for motion adaptation. The method is computationally efficient, enabling real-time character control for interactive applications. We demonstrate its effectiveness and versatility in synthesizing a wide variety of motions with close interactions.
Conference Paper
This paper presents an interactive motion adaptation scheme for close interactions between skeletal characters and mesh structures, such as moving through restricted environments, and manipulating objects. This is achieved through a new spatial relationship-based representation, which describes the kinematics of the body parts by the weighted sum of translation vectors relative to points selectively sampled over the surfaces of the mesh structures. In contrast to previous discrete representations that either only handle static spatial relationships, or require offline, costly optimization processes, our continuous framework smoothly adapts the motion of a character to large updates of the mesh structures and character morphologies on-the-fly, while preserving the original context of the scene. The experimental results show that our method can be used for a wide range of applications, including motion retargeting, interactive character control and deformation transfer for scenes that involve close interactions. Our framework is useful for artists who need to design animated scenes interactively, and modern computer games that allow users to design their own characters, objects and environments.
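The spatial relationship-based representation above, body-part positions expressed as weighted sums of translation vectors relative to points sampled on the mesh surfaces, can be sketched as follows. Uniform weights are an assumption made here for brevity; the paper derives the weights from the scene.

```python
# Sketch: encode a body-part position relative to sampled surface points,
# then decode it after the surface moves (uniform weights are a toy
# stand-in for the paper's weighting scheme).

def encode_relative(point, samples):
    # One translation vector per sampled surface point.
    offsets = [tuple(p - s for p, s in zip(point, sample)) for sample in samples]
    w = 1.0 / len(samples)  # uniform weight per sample (assumption)
    return offsets, w

def decode_relative(offsets, w, samples):
    # Weighted sum of (sample + offset): the decoded position follows the
    # mesh structure when it deforms or translates.
    dims = len(samples[0])
    return tuple(sum(w * (s[d] + o[d]) for s, o in zip(samples, offsets))
                 for d in range(dims))
```

Because the decoded position is tied to the surface samples rather than to world coordinates, updating the mesh on-the-fly drags the character's body parts along with it, preserving the context of the scene.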
Article
In this paper, we present a technique for retargetting motion: the problem of adapting an animated motion from one character to another. Our focus is on adapting the motion of one articulated figure to another figure with identical structure but different segment lengths, although we use this as a step when considering less similar characters. Our method creates adaptations that preserve desirable qualities of the original motion. We identify specific features of the motion as constraints that must be maintained. A spacetime constraints solver computes an adapted motion that re-establishes these constraints while preserving the frequency characteristics of the original signal. We demonstrate our approach on motion capture data.
Conference Paper
We present a novel technique for editing motion captured animation. Our iterative solver produces physically plausible adapted animations that satisfy alterations in foot and hand contact placement with the animated character's surroundings. The technique uses a system of particles to represent the poses and mass distribution of the character at sampled frames of the animation. Constraints between the vertices within each frame enforce the skeletal structure, including joint limits. Novel constraints extending over vertices in several frames enforce the aggregate dynamics of the character, as well as features such as joint acceleration smoothness. We demonstrate adaptation of several animations to altered foot and hand placement.
Conference Paper
This paper presents a physics-based method for creating complex multi-character motions from short single-character sequences. We represent multi-character motion synthesis as a spacetime optimization problem where constraints represent the desired character interactions. We extend standard spacetime optimization with a novel timewarp parameterization in order to jointly optimize the motion and the interaction constraints. In addition, we present an optimization algorithm based on block coordinate descent and continuations that can be used to solve the large problems that multiple characters usually generate. This framework allows us to synthesize multi-character motion drastically different from the input motion. Consequently, a small input motion dataset is sufficient to express a wide variety of multi-character motions.
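The block coordinate descent scheme mentioned above can be sketched generically: hold one block of variables fixed while taking a descent step on the other, then alternate. The toy objective, step size, and numerical gradient below are illustrative assumptions; the paper's blocks are the motion variables and the timewarp parameters.

```python
# Sketch: generic two-block coordinate descent (illustrative stand-in for
# the paper's motion/timewarp alternation).

def block_coordinate_descent(f, x, y, step=0.1, iters=200):
    def grad(g, v, eps=1e-6):
        # Central-difference gradient of a 1D slice of f.
        return (g(v + eps) - g(v - eps)) / (2 * eps)
    for _ in range(iters):
        # Descend on x with y held fixed, then on y with x held fixed.
        x -= step * grad(lambda u: f(u, y), x)
        y -= step * grad(lambda u: f(x, u), y)
    return x, y

# Example: a coupled convex quadratic; alternating block updates converge
# to its joint minimizer.
f = lambda x, y: (x - 1) ** 2 + (y + 2) ** 2 + 0.1 * x * y
x_opt, y_opt = block_coordinate_descent(f, 0.0, 0.0)
```

Splitting the variables this way keeps each sub-step small even when many characters make the joint problem large, which is the motivation the abstract gives for the approach.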