Conference Paper

GPU techniques for creating visually diverse crowds in real-time.

DOI: 10.1145/1450579.1450596
Conference: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST 2008, Bordeaux, France, October 27-29, 2008
Source: DBLP

ABSTRACT Real-time crowds significantly improve the realism of virtual environments, and their use has therefore increased considerably over the last few years in a variety of applications, including real-time games and virtual tourism. However, due to current hardware limitations, crowd variety tends to be sacrificed so that the crowd simulation can execute in real-time, which reduces the quality and realism of the crowd. The little variety that is currently incorporated in real-time crowds tends to be applied by modulating each avatar with random colours, which has a detrimental effect on texture quality; furthermore, the existing crowd variety is often hard to define and control. To overcome these problems, a set of techniques is presented that defines and controls crowd variety, improving on the variety and quality of current crowds. These techniques introduce variety by changing body mass via a displacement map applied to the mesh; by scaling the skeleton of the avatar; by applying HSV colour shifts to different parts of the avatar; and by transferring textures between avatar models. The appearance of the avatars under animation is also improved through muscle displacement within the mesh. With the new techniques, the visual quality of the crowd is improved due to the increase in diversity.
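
Of the variety techniques listed above, the per-part HSV colour shift is the easiest to illustrate compactly. The Python sketch below shows the idea on the CPU, presumably applied per texel in a fragment shader in the paper's GPU implementation; the body-part names, offset ranges, and per-avatar seeding are illustrative assumptions, not parameters taken from the paper.

# Minimal CPU-side sketch of a per-body-part HSV colour shift: each avatar
# instance draws one hue/saturation/value offset per labelled part from a
# seeded RNG, so many instances can share a single texture yet read as
# differently coloured. Part names and offset ranges are hypothetical.
import colorsys
import random

def hsv_shift(rgb, dh, ds, dv):
    """Apply an HSV offset to an RGB colour whose components lie in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    h = (h + dh) % 1.0                      # hue wraps around the colour wheel
    s = min(max(s + ds, 0.0), 1.0)          # clamp saturation
    v = min(max(v + dv, 0.0), 1.0)          # clamp value
    return colorsys.hsv_to_rgb(h, s, v)

def per_part_shifts(seed, parts=("skin", "hair", "shirt", "trousers")):
    """One reproducible HSV offset per labelled body part for a given avatar."""
    rng = random.Random(seed)
    return {p: (rng.uniform(-0.08, 0.08),   # small hue drift
                rng.uniform(-0.25, 0.25),   # saturation
                rng.uniform(-0.25, 0.25))   # value / brightness
            for p in parts}

shifts = per_part_shifts(seed=1234)         # hypothetical per-avatar seed
base_shirt_texel = (0.20, 0.35, 0.60)       # example texel from the shared texture
print(hsv_shift(base_shirt_texel, *shifts["shirt"]))

Seeding the offsets per avatar keeps each instance's colours stable from frame to frame while still letting thousands of instances share one source texture.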

  • ABSTRACT: Despite its popularity, agent-based modeling is limited by serious barriers that constrain its usefulness as an exploratory tool. In particular, there is a paucity of systematic approaches for extracting coarse-grained, system-level information as it emerges in direct simulation. This is particularly problematic for agent-based models (ABMs) of complex urban systems, in which macroscopic phenomena such as sprawl may manifest themselves coarsely from bottom-up dynamics among diverse agent-actors interacting across scales. Often these connections are not known, but treating them is nevertheless crucial in enabling prediction, in supporting decisions, and in facilitating the design, control, and optimization of urban systems. In this article, we describe and implement a metasimulation scheme for extracting macroscopic information from the local dynamics of agent-based simulation, which allows acceleration of coarse-scale computing and which may also serve as a precursor to handling emergence in complex urban simulation. We compare direct ABM simulation, population-level equation solutions, and coarse projective integration. We apply the scheme to the simulation of urban sprawl from local drivers of urbanization, urban growth, and population dynamics. Numerical examples of the three approaches are provided to compare their accuracy and efficiency. We find that our metasimulation scheme can significantly accelerate complex urban simulations while maintaining a faithful representation of the original model. (A toy sketch of the coarse projective integration scheme named here is given after this list.)
    International Journal of Geographical Information Science, 10/2012
  • ABSTRACT: In current computer games and simulation environments, the individuality of virtual character bodies is mainly constructed using different textures and accessories. However, this type of modeling generates anthropometrically similar shapes because it relies on a single body model or only a few. Alternatively, using a large variety of body size models requires more storage resources and design effort. We present an efficient method for generating and storing a variety of body size models derived from a skinned template. Our method requires no additional design effort and uses the existing skinning data already attached to the template model. The algorithm used for sizing the model is based on the anthropometric body measurement standards used in ergonomic design applications. The resulting body size models reuse the same skinning information for animation by adapting the underlying skeleton according to the anthropometric parameters. The system is useful in CAD applications ranging from the ergonomic design of clothes to the parametric resizing of avatars. (A minimal sketch of the skeleton-adaptation step is given after this list.)
    Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST 2009, Kyoto, Japan, November 18-20, 2009
  • ABSTRACT: [Video: https://www.youtube.com/watch?v=xE7tK7wWinM] Large-scale crowd simulation and visualization is crucial for the next generation of interactive virtual environments. Current authoring techniques produce good results but are laborious, and they demand valuable graphics memory and computational resources beyond the reach of consumer-level hardware. In this paper, we propose a technique for generating animatable characters for crowds that reduces memory requirements. The first step consists of reducing, segmenting, and labeling a data set of virtual characters into simpler body parts; the labeling information is then manually generated and used to correctly match different body parts in order to generate new characters. The second step comprises a method to embed the rig and skinning information into the texture space shared among the new characters. Additional methods using color, skin feature, pattern, fat, wrinkle, and textile fold maps are used to add more variety. Animation sequences are stored in auxiliary textures. These can be transferred between different characters, as well as between versions of the same character at different levels of detail; such animations can be modified and otherwise reused, increasing variety while reducing memory requirements. We demonstrate that our technique has four advantages: first, memory requirements are reduced by 91% compared to traditional libraries; second, it can generate previously nonexistent characters from the original data set; third, embedding the rig and skinning into texture space allows painless animation transfer between different characters; and fourth, the same applies between different levels of detail of the same character. (A toy sketch of the animation-in-texture storage is given after this list.)
    Motion in Games, Dublin, Ireland, 11/2013
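
As referenced in the first item above, coarse projective integration alternates short bursts of direct agent-level simulation with large extrapolation steps on a coarse variable. The Python sketch below illustrates only that general scheme on an invented density-dependent growth rule; it is not the urban model of the cited article, and all parameters are made up.

# Toy illustration of coarse projective integration: run an agent-level model
# for a few fine steps, estimate the time derivative of a coarse observable
# (here the developed-cell fraction), leap that observable forward over a
# large step, then "lift" back to agent states. Growth rule and parameters
# are invented for the toy.
import random

rng = random.Random(42)

def fine_step(cells, rate=0.05):
    """One agent-level step: each undeveloped cell develops with a
    density-dependent probability (a stochastic logistic-style rule)."""
    frac = sum(cells) / len(cells)
    return [c or (rng.random() < rate * frac) for c in cells]

def lift(fraction, n):
    """Build an agent state whose developed fraction matches the coarse value."""
    return [rng.random() < fraction for _ in range(n)]

def coarse_projective_integration(n=10_000, fine_steps=5, leap=20.0, outer=10):
    cells = lift(0.05, n)
    history = [sum(cells) / n]
    for _ in range(outer):
        fracs = []
        for _ in range(fine_steps):          # short burst of direct simulation
            cells = fine_step(cells)
            fracs.append(sum(cells) / len(cells))
        slope = fracs[-1] - fracs[-2]        # d(fraction)/dt, one fine step = one time unit
        projected = min(1.0, fracs[-1] + leap * slope)
        cells = lift(projected, n)           # restart the agents from the projected value
        history.append(projected)
    return history

print(coarse_projective_integration())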
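
The second item resizes a skinned template by adapting its skeleton to anthropometric parameters. A minimal Python sketch of that adaptation step follows, assuming a simple parent-offset skeleton and an invented segment-to-measurement table; the cited method works from full anthropometric sizing standards and the template's existing skinning data.

# Minimal sketch of resizing a skinned template by adapting its skeleton:
# every bone offset is scaled by the ratio of a target anthropometric
# measurement to the template's measurement for that body segment, while the
# existing skinning weights are reused unchanged. Bone names, offsets, and
# the measurement table are hypothetical.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Bone:
    name: str
    parent: Optional[str]
    offset: Tuple[float, float, float]   # local offset from the parent joint, metres

TEMPLATE = [
    Bone("pelvis", None, (0.0, 1.00, 0.0)),
    Bone("spine", "pelvis", (0.0, 0.50, 0.0)),
    Bone("thigh_l", "pelvis", (0.10, -0.45, 0.0)),
    Bone("shin_l", "thigh_l", (0.0, -0.45, 0.0)),
    Bone("upper_arm_l", "spine", (0.20, -0.05, 0.0)),
    Bone("forearm_l", "upper_arm_l", (0.28, 0.0, 0.0)),
]

# Template measurements per body segment, and which segment each bone belongs to.
TEMPLATE_MEASURE = {"leg": 0.90, "torso": 0.50, "arm": 0.48}
SEGMENT_OF_BONE = {"thigh_l": "leg", "shin_l": "leg", "spine": "torso",
                   "upper_arm_l": "arm", "forearm_l": "arm"}

def resize_skeleton(bones, target_measure):
    """Scale each bone's offset by its segment's target/template length ratio."""
    resized = []
    for b in bones:
        segment = SEGMENT_OF_BONE.get(b.name)
        ratio = target_measure[segment] / TEMPLATE_MEASURE[segment] if segment else 1.0
        resized.append(Bone(b.name, b.parent, tuple(c * ratio for c in b.offset)))
    return resized

# Example target: longer legs, slightly longer torso, shorter arms.
for bone in resize_skeleton(TEMPLATE, {"leg": 1.00, "torso": 0.52, "arm": 0.44}):
    print(bone.name, bone.offset)

Because the mesh vertices stay bound to the same bones with the same weights, the skinned mesh follows the resized skeleton without any new design work, which is the point made in the cited abstract.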
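
The third item stores animation sequences in auxiliary textures so that they can be shared across characters and levels of detail. The NumPy sketch below shows one plausible packing, with one texture row per frame and three RGBA texels per bone holding a 3x4 skinning matrix; this layout is an assumption for illustration, not the encoding used in the cited paper.

# Minimal sketch of packing a skeletal animation into a float texture: each
# row holds one frame, and each bone occupies three RGBA texels storing the
# rows of its 3x4 skinning matrix, so a vertex shader could fetch any bone's
# transform from (bone, frame) coordinates. Sizes and layout are illustrative.
import numpy as np

def pack_animation(frames):
    """frames: per-frame lists of 3x4 bone matrices -> (frames, bones*3, 4) float texture."""
    n_frames = len(frames)
    n_bones = len(frames[0])
    tex = np.zeros((n_frames, n_bones * 3, 4), dtype=np.float32)
    for f, bones in enumerate(frames):
        for b, mat in enumerate(bones):
            tex[f, b * 3 : b * 3 + 3, :] = mat       # three texels per bone
    return tex

def fetch_bone(tex, frame, bone):
    """Reassemble one bone's 3x4 matrix from the texture (what the shader would do)."""
    return tex[frame, bone * 3 : bone * 3 + 3, :]

# Two frames of a two-bone rig, identity transforms for brevity.
identity = np.hstack([np.eye(3, dtype=np.float32), np.zeros((3, 1), np.float32)])
anim = [[identity, identity], [identity, identity]]
texture = pack_animation(anim)
print(texture.shape)                         # (2 frames, 6 texels wide, RGBA)
print(fetch_bone(texture, frame=1, bone=0))

Because the animation lives in a texture rather than in per-character vertex data, the same texture can be sampled by any character that shares the rig layout, which is what makes the transfer between characters and between levels of detail cheap.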