Interactive Expressive Illustration of 3D City Scenes
Bin Pan · Wei Chen · Xiang Chen · Yang Yu · Qunsheng Peng
State Key Lab of CAD&CG, Zhejiang University
Abstract While many approaches have been devel-
oped to visualize 3D city scenes, most of them exhibit
the visualization results in a uniform rendering style.
This paper presents an expressive rendering approach
for visualizing large-scale 3D city scenes with various
rendering styles integrated in a seamless way. Each view
is actually a combination of the photorealistic render-
ing, the nonphotorealistic rendering, and the line draw-
ing, so as to highlight the information that is interesting
for the users and de-emphasize the other that is less im-
portant. At run-time, the users are allowed to specify
their interested locations with pre-determined 3D land-
marks. Our system automatically computes the salience
of each location and visualize the entire scene with em-
phasis in the area of interests. The GPU-based imple-
mentation enables real-time performance, and demon-
strates outstanding practicality.
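The salience computation itself is not detailed in this excerpt. As a minimal sketch, one might assume a distance-based Gaussian falloff around the user-selected 3D landmarks; the function name, the falloff shape, and the max-combination rule below are all illustrative assumptions, not the paper's formulation:

```python
import math

def landmark_salience(pos, landmarks, sigma=50.0):
    """Hypothetical salience of a scene location, driven by the
    user-selected 3D landmark positions.

    Each landmark contributes a Gaussian falloff with range sigma
    (scene units); contributions combine by taking the maximum, so
    salience is 1.0 at a landmark and decays with distance from it.
    """
    best = 0.0
    for lm in landmarks:
        d2 = sum((p - q) ** 2 for p, q in zip(pos, lm))
        best = max(best, math.exp(-d2 / (2.0 * sigma ** 2)))
    return best
```

A per-location value like this can then steer how strongly each building is emphasized in the final image.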
Keywords City Scenes · Expressive Rendering

1 Introduction
Due to the rapid development of computer hardware and the progress in (semi-)automatic data acquisition, it is now possible to create large-scale 3D city scenes at reasonable cost. An increasing number of applications and systems, such as urban planning and redevelopment or facility management, incorporate virtual 3D city scenes as essential components. Large-scale 3D city scenes are typically characterized by a large number of objects with different types, structures and hierarchies, yielding a high degree of visual detail. They therefore convey a huge amount of information, which frequently causes perceptual and cognitive problems for the users, such as heavy visual clutter and information overload. This observation reveals a fundamental problem in the visualization of complex 3D city scenes, namely how to give the users efficient and effective access to the information.
The 3D representation of a city scene serves as a medium to convey spatially related information in a comprehensive way. The requirements on virtual 3D city scenes vary between applications. In the context of tourism, entertainment, or public participation, a high degree of photorealism is required. In applications that provide analytical and exploratory functionality, the visual details of buildings are not of primary interest; the shape and the structure are the main concerns. For commercial applications like Google Earth, WorldWind, and Microsoft Virtual Earth, fast and accurate access to geospatial information becomes more and more important. In almost all cases, the users are interested in only a small part or just a specific aspect of the 3D city scene, so how to efficiently present this meaningful information to the users is of great importance. A rendering is an abstraction that favors, preserves, or even emphasizes some qualities while sacrificing, suppressing, or omitting other characteristics that are not the focus of attention [GSG∗99]. However, most current 3D city illustration applications present all buildings at the same level of detail, making it difficult for the users to filter out useless information. Large areas of the screen then contain useless or even misused pixels with respect to information content and transfer, which are called dead values [JD08]. For example, a bird's-eye view of a 3D city scene frequently shows too many details at once.
7 Conclusion and Future Work
In this paper, we have proposed a novel technique for the real-time expressive illustration of 3D city scenes. Based on user interaction in the scene, we emphasize the buildings the user is concerned with through a combination of several rendering techniques. Compared with a uniform rendering style, this improves the visibility of the buildings that interest the user, so that users can access the information they need more efficiently and effectively. The method can greatly improve the readability of modern 3D city scenes.
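The combination of rendering techniques can be pictured as a per-pixel composition. The following sketch assumes three pre-rendered layers and a per-pixel salience map in [0, 1]; the blending rule (linear interpolation of the color layers plus a line overlay in de-emphasized regions) is an illustrative assumption, not the paper's exact operator:

```python
import numpy as np

def compose_views(photo, npr, lines, salience):
    """Blend photorealistic, non-photorealistic, and line-drawing
    layers by a per-pixel salience map.

    photo, npr : (H, W, 3) colors in [0, 1]
    lines      : (H, W) line-drawing coverage in [0, 1]
    salience   : (H, W) values in [0, 1], 1.0 = area of interest
    """
    s = salience[..., None]                # broadcast over RGB channels
    base = s * photo + (1.0 - s) * npr     # realistic where salient
    # the line drawing shows mainly in de-emphasized (low-salience) regions
    line_mask = ((1.0 - salience) * lines)[..., None]
    return base * (1.0 - line_mask)        # darken pixels along lines
```

Under this rule, a fully salient pixel reproduces the photorealistic layer unchanged, while a non-salient pixel falls back to the abstracted style with line strokes overlaid.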
In the current implementation, objects are emphasized based on geometric features of the buildings. Some areas of the scene offer too few geometric details to emphasize, so we plan to exploit additional sources, such as textures, for expressive rendering; for instance, the feature pattern in a texture could be extracted and used to enhance the corresponding feature. Meanwhile, areas with too many geometric details need further simplification to decrease their visual influence. Thus, a method for simplifying geometry of low importance and for transitioning between different levels of detail is needed. In addition, we plan to incorporate more artistic styles in our system.

References
[ABHY08] Arge L., Berg M. D., Haverkort H., Yi K.: The priority R-tree: A practically efficient and worst-case optimal R-tree. ACM Trans. Algorithms 4, 1 (2008).
[BSSS07] Brosz J., Samavati F. F., Carpendale M. S. T., Sousa M. C.: Single camera flexible projection. In NPAR '07: Proc. of the 5th International Symposium on Non-Photorealistic Animation and Rendering (2007), pp. 33–42.
[CDF∗06] Cole F., DeCarlo D., Finkelstein A., Kin K.,
Morley K., Santella A.: Directing gaze in 3d mod-
els with stylized focus. In Proc. of Eurographics Sym-
posium on Rendering ’06 (2006), pp. 377–387.
[CF08] Cole F., Finkelstein A.: Partial visibility for styl-
ized lines. In NPAR 2008 (June 2008).
[CF09] Cole F., Finkelstein A.: Fast high-quality line visibility. In Proceedings of I3D 2009 (Feb. 2009).
[DBNF05] Döllner J., Buchholz H., Nienhaus M., Kirsch F.: Illustrative visualization of 3d city models. In Proc. of Visualization and Data Analysis 2005 (2005), pp. 42–51.
[DW03] Döllner J., Walther M.: Real-time expressive rendering of city models. In Proc. of the Seventh International Conference on Information Visualization '03 (2003), p. 245.
[FM02] Forberg A., Mayer H.: Generalization of 3d building data based on scale-spaces. In International Archives of Photogrammetry and Remote Sensing (2002), pp. 225–230.
[GASP08] Grabler F., Agrawala M., Sumner R. W., Pauly
M.: Automatic generation of tourist maps. ACM TOG
27, 3 (Aug. 2008), 1–11.
[GD] Glander T., Döllner J.: Techniques for generalizing building geometry of complex virtual 3d city models. In 2nd International Workshop on 3D Geo-Information: Requirements, Acquisition, Modelling.
[GD07] Glander T., Döllner J.: Cell-based generalization
of 3d building groups with outlier management. In
Proc. of the 15th annual ACM international sympo-
sium on Advances in geographic information systems
’07 (2007), pp. 1–4.
[GGSC98] Gooch A., Gooch B., Shirley P., Cohen E.: A
non-photorealistic lighting model for automatic tech-
nical illustration. In Proc. of SIGGRAPH '98 (1998).
[GSG∗99] Gooch B., Sloan P.-P. J., Gooch A., Shirley P.,
Riesenfeld R.: Interactive technical illustration. In
I3D ’99: Proceedings of the 1999 symposium on In-
teractive 3D graphics (1999).
[GTD07] Glander T., Trapp M., Döllner J.: of effective landmark depiction in geovirtual 3d environments by view-dependent deformation. In International Symposium on LBS and Telecartography (October 2007).
[Gut88] Guttman A.: R-trees: a dynamic index structure for
spatial searching. 599–609.
[IFH∗03] Isenberg T., Freudenberg B., Halper N., Schlechtweg S., Strothotte T.: A developer's guide to silhouette algorithms for polygonal models. IEEE Computer Graphics and Applications 23, 4 (2003).
[JD08] Jobst M., Döllner J.: 3d city model visualization
with cartography-oriented design. In REAL CORP
Proc. Vienna, May 19-21 2008 (2008).
[KHG03] Kosara R., Hauser H., Gresh D. L.: An interac-
tion view on information visualization. In State-of-
the-Art Proceedings of EUROGRAPHICS 2003 (EG
2003) (2003), pp. 123–137.
[KMH01] Kosara R., Miksch S., Hauser H.: Semantic depth
of field. In INFOVIS ’01: Proc. of the IEEE Sympo-
sium on Information Visualization ’01 (2001), p. 97.
[LTD08] Lorenz H., Trapp M., Döllner J.:
multi-perspective views of virtual 3d landscape and
city models. In Lecture Notes in Geoinformation and
Cartography ’08 (2008), pp. 301–321.
[May99] Mayer H.: Scale-space events for the generalization of
3d-building data. In International Archives of Pho-
togrammetry and Remote Sensing (1999), pp. 639–
[MDWK] Möser S., Degener P., Wahl R., Klein R.: Context
aware terrain visualization for wayfinding and naviga-
tion. Computer Graphics Forum 27, 7.
[SSS] Straßer W., Stoev S. L., Schmalstieg D.: The through-the-lens metaphor: Taxonomy and application. In Proc. of the IEEE Virtual Reality.
[TGBD] Trapp M., Glander T., Buchholz H., Döllner J.: 3d generalization lenses for interactive focus + context visualization of virtual city models. In Proc. of the 12th International Conference on Information Visualization.
[Thi] Thiemann F.: Generalization of 3d building data. In Proc. of Joint International Symposium on GeoSpatial Theory, Processing and Applications.