Published by Wiley and Eurographics - European Association for Computer Graphics
Online ISSN: 1467-8659
Disciplines: Computer science

Computational Smocking through Fabric‐Thread Interaction · April 2024 · 577 Reads · 2 Citations · 120 reads in the past 30 days
Estimating Cloth Simulation Parameters From Tag Information and Cusick Drape Test · April 2024 · 109 Reads · 1 Citation · 88 reads in the past 30 days
Polygon Laplacian Made Robust · April 2024 · 168 Reads · 1 Citation · 72 reads in the past 30 days
Graph‐Based Synthesis for Skin Micro Wrinkles · August 2023 · 117 Reads · 4 Citations · 69 reads in the past 30 days
Real‐Time Underwater Spectral Rendering · April 2024 · 393 Reads · 69 reads in the past 30 days
Computer Graphics Forum is the premier journal for in-depth technical articles on computer graphics. Published jointly by Wiley and Eurographics, we enable our readers to keep pace with this fast-moving field with our rapid publication times and coverage of major international computer graphics events, including the annual Eurographics Conference proceedings. We welcome original research papers on a wide range of topics including image synthesis, rendering, perception and visualisation.
December 2024 · 28 Reads
Research shows that user traits can modulate the use of visualization systems and have a measurable influence on users' accuracy, speed, and attention when performing visual analysis. This highlights the importance of user‐adaptive visualizations that can adapt themselves to the characteristics and preferences of the user. However, there are very few such visualization systems, as creating them requires broad knowledge from various sub‐domains of the visualization community. A user‐adaptive system must consider which user traits it adapts to, its adaptation logic, and the types of interventions it supports. In this STAR, we survey a broad space of existing literature and consolidate it to structure the process of creating user‐adaptive visualizations into five components: Capture Ⓐ Input from the user and any relevant peripheral information. Perform computational Ⓑ User Modelling with this input to construct a Ⓒ User Representation. Employ Ⓓ Adaptation Assignment logic to identify when and how to introduce Ⓔ Interventions. Our novel taxonomy provides a road map for work in this area, describing the rich space of current approaches and highlighting open areas for future work.
November 2024 · 5 Reads
User confidence plays an important role in guided visual data analysis scenarios, especially when uncertainty is involved in the analytical process. However, measuring confidence in practical scenarios remains an open challenge, as previous work relies primarily on self‐reporting methods. In this work, we propose a quantitative approach to measure user confidence—as opposed to trust—in an analytical scenario. We do so by exploiting the respective user interaction provenance graph and examining the impact of guidance using a set of network metrics. We assess the usefulness of our proposed metrics through a user study that correlates results obtained from self‐reported confidence assessments and our metrics—both with and without guidance. The results suggest that our metrics improve the evaluation of user confidence compared to available approaches. In particular, we found a correlation between self‐reported confidence and some of the proposed provenance network metrics. The quantitative results, though, do not show a statistically significant impact of the guidance on user confidence. An additional descriptive analysis suggests that guidance could impact users' confidence and that the qualitative analysis of the provenance network topology can provide a comprehensive view of changes in user confidence. Our results indicate that our proposed metrics and the provenance network graph representation support the evaluation of user confidence and, subsequently, the effective development of guidance in VA.
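As a rough illustration of the kind of provenance-based measurement described above, the sketch below computes a few generic topology metrics over a user-interaction provenance graph with networkx; the node and edge semantics, the metric set, and the function name are assumptions for illustration, not the exact metrics proposed in the paper.

import networkx as nx

def provenance_metrics(g: nx.DiGraph) -> dict:
    """Summarize the topology of an interaction provenance graph."""
    und = g.to_undirected()
    return {
        "num_events": g.number_of_nodes(),
        "num_transitions": g.number_of_edges(),
        "density": nx.density(g),
        "avg_clustering": nx.average_clustering(und),
        # nodes with several outgoing edges suggest branching / revisited analysis paths
        "num_branches": sum(1 for n in g if g.out_degree(n) > 1),
        "max_depth": nx.dag_longest_path_length(g) if nx.is_directed_acyclic_graph(g) else None,
    }

# Usage: build the graph from logged interactions, then compare the metrics
# between guided and unguided sessions (or against self-reported confidence).
g = nx.DiGraph()
g.add_edges_from([("load", "filter"), ("filter", "zoom"), ("filter", "select"), ("select", "annotate")])
print(provenance_metrics(g))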
November 2024 · 29 Reads
We present a computational approach for unfolding 3D shapes isometrically into the plane as a single patch without overlapping triangles. This is a hard, sometimes impossible, problem, which existing methods are forced to soften by allowing for map distortions or multiple patches. Instead, we propose a geometric relaxation of the problem: We modify the input shape until it admits an overlap‐free unfolding. We achieve this by locally displacing vertices and collapsing edges, guided by the unfolding process. We validate our algorithm quantitatively and qualitatively on a large dataset of complex shapes and show its proficiency by fabricating real shapes from paper.
November 2024 · 22 Reads
Automatic font generation aims to streamline the design process by creating new fonts with minimal style references. This technology significantly reduces the manual labour and costs associated with traditional font design. Image‐to‐image translation has been the dominant approach, transforming font images from a source style to a target style using a few reference images. However, this framework struggles to fully decouple content from style, particularly when dealing with significant style shifts. Despite these limitations, image‐to‐image translation remains prevalent due to two main challenges faced by conditional generative models: (1) inability to handle unseen characters and (2) difficulty in providing precise content representations equivalent to the source font. Our approach tackles these issues by leveraging recent advancements in Chinese character representation research to pre‐train a robust content representation model. This model not only handles unseen characters but also generalizes to non‐existent ones, a capability absent in traditional image‐to‐image translation. We further propose a Transformer‐based Style Filter that not only accurately captures stylistic features from reference images but also handles any combination of them, fostering greater convenience for practical automated font generation applications. Additionally, we incorporate content loss with commonly used pixel‐ and perceptual‐level losses to refine the generated results from a comprehensive perspective. Extensive experiments validate the effectiveness of our method, particularly its ability to handle unseen characters, demonstrating significant performance gains over existing state‐of‐the‐art methods.
November 2024 · 6 Reads
In this paper, we address the problem of plausible object placement for the challenging task of realistic image composition. We propose DiffPop, the first framework that utilizes a plausibility‐guided denoising diffusion probabilistic model to learn the scale and spatial relations among multiple objects and the corresponding scene image. First, we train an unguided diffusion model to directly learn the object placement parameters in a self‐supervised manner. Then, we develop a human‐in‐the‐loop pipeline which exploits human labeling on the diffusion‐generated composite images to provide the weak supervision for training a structural plausibility classifier. The classifier is further used to guide the diffusion sampling process towards generating plausible object placements. Experimental results verify the superiority of our method for producing plausible and diverse composite images on the new Cityscapes‐OP dataset and the public OPA dataset, as well as demonstrate its potential in applications such as data augmentation and multi‐object placement tasks. Our dataset and code will be released.
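For readers unfamiliar with classifier guidance, the following minimal sketch shows the generic guided DDPM sampling step that such a plausibility classifier enables; the function names (eps_model, clf), tensor shapes, and the two-class convention are hypothetical stand-ins, not the released DiffPop API.

import torch

@torch.no_grad()
def guided_step(x_t, t, cond, eps_model, clf, alphas_cumprod, guidance_scale=2.0):
    a_t = alphas_cumprod[t]
    # Gradient of the classifier's "plausible" log-probability w.r.t. the noisy placement.
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        log_p = clf(x_in, t, cond).log_softmax(-1)[..., 1].sum()
        grad = torch.autograd.grad(log_p, x_in)[0]
    eps = eps_model(x_t, t, cond)
    # Shift the predicted noise along the classifier gradient (Dhariwal & Nichol-style guidance).
    eps_guided = eps - guidance_scale * torch.sqrt(1.0 - a_t) * grad
    # Predict x_0 from the guided noise; this feeds the usual DDPM posterior to obtain x_{t-1}.
    x0 = (x_t - torch.sqrt(1.0 - a_t) * eps_guided) / torch.sqrt(a_t)
    return x0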
November 2024 · 27 Reads
Recent advancements in generative models have made image editing highly effective, with impressive results. By extending this progress to 3D geometry models, we introduce iShapEditing, a novel framework for 3D shape editing which is applicable to both generated and real shapes. Users manipulate shapes by dragging handle points to corresponding targets, offering an intuitive and intelligent editing interface. Leveraging the Triplane Diffusion model and robust intermediate feature correspondence, our framework utilizes classifier guidance to adjust noise representations during the sampling process, ensuring alignment with user expectations while preserving plausibility. For real shapes, we employ shape predictions at each time step alongside a DDPM‐based inversion algorithm to derive their latent codes, facilitating seamless editing. iShapEditing provides effective and intelligent control over shapes without the need for additional model training or fine‐tuning. Experimental examples demonstrate the effectiveness and superiority of our method in terms of editing accuracy and plausibility.
November 2024 · 5 Reads
We introduce LGSur‐Net, an end‐to‐end deep learning architecture, engineered for the upsampling of sparse point clouds. LGSur‐Net harnesses a trainable Gaussian local representation by positioning a series of Gaussian functions on an oriented plane, complemented by the optimization of individual covariance matrices. The integration of parametric factors allows for the encoding of the plane's rotational dynamics and Gaussian weightings into a linear transformation matrix. Then we extract the feature maps from the point cloud and its adjoining edges and learn the local Gaussian depictions to accurately model the shape's local geometry through an attention‐based network. The Gaussian representation's inherent high‐order continuity endows LGSur‐Net with the natural ability to predict surface normals and support upsampling to any specified resolution. Comprehensive experiments validate that LGSur‐Net efficiently learns from sparse data inputs, surpassing the performance of existing state‐of‐the‐art upsampling methods. Our code is publicly available at https://github.com/Rangiant5b72/LGSur-Net.
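A minimal numpy sketch of the underlying idea of a local Gaussian representation: evaluating a weighted sum of 2D Gaussians in the coordinates of an oriented tangent plane. In LGSur‐Net the centres, covariances, and weights are learned quantities and the exact parameterization differs from this toy version.

import numpy as np

def eval_local_gaussians(uv, centers, covs, weights):
    """Height field h(u, v) = sum_i w_i * N(uv; mu_i, Sigma_i) over local plane coordinates uv (N, 2)."""
    h = np.zeros(len(uv))
    for mu, sigma, w in zip(centers, covs, weights):
        d = uv - mu                                   # offsets in plane coordinates
        inv = np.linalg.inv(sigma)
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(sigma)))
        h += w * norm * np.exp(-0.5 * np.einsum("ni,ij,nj->n", d, inv, d))
    return h

Because each Gaussian is smooth to all orders, such a representation has analytic derivatives, which is what makes normal prediction and resolution-free upsampling natural.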
November 2024 · 19 Reads · 1 Citation
We introduce 𝒢‐Style, a novel algorithm designed to transfer the style of an image onto a 3D scene represented using Gaussian Splatting. Gaussian Splatting is a powerful 3D representation for novel view synthesis, as—compared to other approaches based on Neural Radiance Fields—it provides fast scene renderings and user control over the scene. Recent pre‐prints have demonstrated that the style of Gaussian Splatting scenes can be modified using an image exemplar. However, since the scene geometry remains fixed during the stylization process, current solutions fall short of producing satisfactory results. Our algorithm aims to address these limitations by following a three‐step process: In a pre‐processing step, we remove undesirable Gaussians with large projection areas or highly elongated shapes. Subsequently, we combine several losses carefully designed to preserve different scales of the style in the image, while maintaining as much as possible the integrity of the original scene content. During the stylization process and following the original design of Gaussian Splatting, we split Gaussians where additional detail is necessary within our scene by tracking the gradient of the stylized color. Our experiments demonstrate that 𝒢‐Style generates high‐quality stylizations within just a few minutes, outperforming existing methods both qualitatively and quantitatively.
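The pre-processing step can be pictured as a simple per-Gaussian filter. The sketch below flags Gaussians that are highly elongated or have an unusually large footprint; the thresholds and array layout are assumptions, not the authors' exact criteria.

import numpy as np

def keep_mask(scales, elong_thresh=10.0, area_quantile=0.99):
    """scales: (N, 3) per-Gaussian axis scales. Returns a boolean mask of Gaussians to keep."""
    s_max = scales.max(axis=1)
    s_min = scales.min(axis=1)
    elongation = s_max / np.maximum(s_min, 1e-8)
    # Approximate the projected footprint by the product of the two largest axis scales.
    s_sorted = np.sort(scales, axis=1)
    footprint = s_sorted[:, 2] * s_sorted[:, 1]
    return (elongation < elong_thresh) & (footprint < np.quantile(footprint, area_quantile))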
November 2024 · 15 Reads · 1 Citation
Personalization techniques for large text‐to‐image (T2I) models allow users to incorporate new concepts from reference images. However, existing methods primarily rely on textual descriptions, leading to limited control over customized images and failing to support fine‐grained and local editing (e.g., shape, pose, and details). In this paper, we identify sketches as an intuitive and versatile representation that can facilitate such control, e.g., contour lines capturing shape information and flow lines representing texture. This motivates us to explore a novel task of sketch concept extraction: given one or more sketch‐image pairs, we aim to extract a special sketch concept that bridges the correspondence between the images and sketches, thus enabling sketch‐based image synthesis and editing at a fine‐grained level. To accomplish this, we introduce CustomSketching, a two‐stage framework for extracting novel sketch concepts via few‐shot learning. Considering that an object can often be depicted by a contour for general shapes and additional strokes for internal details, we introduce a dual‐sketch representation to reduce the inherent ambiguity in sketch depiction. We employ a shape loss and a regularization loss to balance fidelity and editability during optimization. Through extensive experiments, a user study, and several applications, we show our method is effective and superior to the adapted baselines.
November 2024 · 19 Reads
Radiance field methods represent the state of the art in reconstructing complex scenes from multi‐view photos. However, these reconstructions often suffer from one or both of the following limitations: First, they typically represent scenes in low dynamic range (LDR), which restricts their use to evenly lit environments and hinders immersive viewing experiences. Secondly, their reliance on a pinhole camera model, assuming all scene elements are in focus in the input images, presents practical challenges and complicates refocusing during novel‐view synthesis. Addressing these limitations, we present a lightweight method based on 3D Gaussian Splatting that utilizes multi‐view LDR images of a scene with varying exposure times, apertures, and focus distances as input to reconstruct a high‐dynamic‐range (HDR) radiance field. By incorporating analytical convolutions of Gaussians based on a thin‐lens camera model as well as a tonemapping module, our reconstructions enable the rendering of HDR content with flexible refocusing capabilities. We demonstrate that our combined treatment of HDR and depth of field facilitates real‐time cinematic rendering, outperforming the state of the art.
November 2024 · 28 Reads
As a fundamental problem in computer vision, 3D point cloud registration (PCR) aims to seek the optimal transformation to align point cloud pairs. Equivariance lies at the core of matching point clouds at arbitrary poses. In this paper, we propose GETr, a geometric equivariant transformer for PCR. By learning point‐wise orientations, we decouple the coordinates from the pose of the point clouds, which is the key to achieving equivariance in our framework. We then utilize an attention mechanism to learn geometric features for superpoint matching; the proposed novel self‐attention mechanism encodes the geometric information of the point clouds. Finally, a coarse‐to‐fine scheme is used to obtain high‐quality correspondences for registration. Extensive experiments on both indoor and outdoor benchmarks demonstrate that our method outperforms various existing state‐of‐the‐art methods.
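Once correspondences between the two clouds are established, the rigid transform itself is typically recovered with a (weighted) Kabsch/SVD fit; the generic sketch below shows that standard final step and is not specific to GETr.

import numpy as np

def fit_rigid_transform(src, dst, w=None):
    """Return R (3x3), t (3,) minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2."""
    w = np.ones(len(src)) if w is None else w
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(0)
    mu_d = (w[:, None] * dst).sum(0)
    H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)          # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t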
November 2024 · 48 Reads
The capability to generate simulation‐ready garment models from 3D shapes of clothed people will significantly enhance the interpretability of captured geometry of real garments, as well as their faithful reproduction in the digital world. This will have notable impact on fields like shape capture in social VR, and virtual try‐on in the fashion industry. To align with the garment modeling process standardized by the fashion industry and cloth simulation software, it is required to recover 2D patterns, which are then placed around the wearer's body model and seamed prior to the draping simulation. This involves an inverse garment design problem, which is the focus of our work here: Starting with an arbitrary target garment geometry, our system estimates its animatable replica along with its corresponding 2D pattern. Built upon a differentiable cloth simulator, it runs an optimization process that is directed towards minimizing the deviation of the simulated garment shape from the target geometry, while maintaining desirable properties such as left‐to‐right symmetry. Experimental results on various real‐world and synthetic data show that our method outperforms state‐of‐the‐art methods in producing both high‐quality garment models and accurate 2D patterns.
November 2024 · 6 Reads
The appearance of a real‐world feather results from the complex interaction of light with its multi‐scale biological structure, including the central shaft, branching barbs, and interlocking barbules on those barbs. In this work, we propose a practical surface‐based appearance model for feathers. We represent the far‐field appearance of feathers using a BSDF that implicitly represents the light scattering from the main biological structures of a feather, such as the shaft, barb and barbules. Our model accounts for the particular characteristics of feather barbs such as the non‐cylindrical cross‐sections and the scattering media via a numerically‐based BCSDF. To model the relative visibility between barbs and barbules, we derive a masking term for the differential projected areas of the different components of the feather's microgeometry, which allows us to analytically compute the masking between barbs and barbules. As opposed to previous works, our model uses a lightweight representation of the geometry based on a 2D texture, and does not require explicitly representing the barbs as curves. We show the flexibility and potential of our appearance model approach to represent the most important visual features of several pennaceous feathers.
November 2024 · 7 Reads
Large‐scale urban point clouds play a vital role in various applications, while rendering and transmitting such data remains challenging due to its large volume, complicated structures, and significant redundancy. In this paper, we present LightUrban, the first point cloud instancing framework for efficient rendering and transmission of fine‐grained complex urban scenes. We first introduce a segmentation method to organize the point clouds into individual buildings and vegetation instances from coarse to fine. Next, we propose an unsupervised similarity detection approach to accurately group instances with similar shapes. Furthermore, a fast pose and size estimation component is applied to calculate the transformations between the representative instance and the corresponding similar instances in each group. By replacing individual instances with their group's representative instances, the data volume and redundancy can be dramatically reduced. Experimental results on large‐scale urban scenes demonstrate the effectiveness of our algorithm. To sum up, our method not only structures the urban point clouds but also significantly reduces data volume and redundancy, filling the gap in lightweighting urban landscapes through instancing.
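The "pose and size estimation" between a group's representative instance and a similar instance amounts to fitting a similarity transform (rotation, uniform scale, translation). The sketch below uses the standard closed-form Umeyama solution on corresponding points, which may differ from the paper's own estimator.

import numpy as np

def umeyama(src, dst):
    """Return scale s, rotation R, translation t such that dst ≈ s * R @ src + t (row-wise)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                                # handle reflection case
    R = U @ S @ Vt
    var_s = (xs ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t

Storing only (s, R, t) per similar instance plus one representative mesh or point set per group is what yields the large reduction in data volume.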
November 2024 · 19 Reads
Gradient meshes are a vector graphics primitive formed by a regular grid of bicubic quad patches. They allow for the creation of complex geometries and colour gradients, with recent extensions supporting features such as local refinement and sharp colour transitions. While many methods exist for recolouring raster images, often achieved by modifying an automatically detected palette of the image, gradient meshes have not received the same amount of attention when it comes to global colour editing. We present a novel method that allows for real‐time palette‐based recolouring of gradient meshes, including gradient meshes constructed using local refinement and containing sharp colour transitions. We demonstrate the utility of our method on synthetic illustrative examples as well as on complex gradient meshes.
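As a toy illustration of palette-based recolouring in general (not the gradient-mesh-specific method above), one can express each control-point colour as non-negative weights over the detected palette and recombine those weights with an edited palette; the normalization and the use of nnls here are simplifying assumptions.

import numpy as np
from scipy.optimize import nnls

def recolour(colours, palette_old, palette_new):
    """colours: (N, 3); palette_old/new: (K, 3). Returns recoloured colours (N, 3)."""
    out = np.empty_like(colours)
    for i, c in enumerate(colours):
        w, _ = nnls(palette_old.T, c)       # non-negative weights over the old palette
        w = w / max(w.sum(), 1e-8)          # normalize to (roughly) convex weights
        out[i] = w @ palette_new            # recombine with the edited palette
    return out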
November 2024 · 23 Reads
The emergence of learning‐based motion in‐betweening techniques offers animators a more efficient way to animate characters. However, existing non‐generative methods either struggle to support long transition generation or produce results that lack diversity. Meanwhile, diffusion models have shown promising results in synthesizing diverse and high‐quality motions driven by text and keyframes. However, in these methods, keyframes often serve as a guide rather than a strict constraint and can sometimes be ignored when keyframes are sparse. To address these issues, we propose a lightweight yet effective diffusion‐based motion in‐betweening framework that generates animations conforming to keyframe constraints. We incorporate keyframe constraints into the training phase to enhance robustness in handling various constraint densities. Moreover, we employ relative positional encoding to improve the model's generalization on long range in‐betweening tasks. This approach enables the model to learn from short animations while generating realistic in‐betweening motions spanning thousands of frames. We conduct extensive experiments to validate our framework using the newly proposed metrics K‐FID, K‐Diversity, and K‐Error, designed to evaluate generative in‐betweening methods. Results demonstrate that our method outperforms existing diffusion‐based methods across various lengths and keyframe densities. We also show that our method can be applied to text‐driven motion synthesis, offering fine‐grained control over the generated results.
November 2024 · 2 Reads
This paper introduces a novel continual learning framework for synthesising novel views of multiple scenes, learning multiple 3D scenes incrementally, and updating the network parameters only with the training data of the upcoming new scene. We build on Neural Radiance Fields (NeRF), which uses a multi‐layer perceptron to model the density and radiance field of a scene as an implicit function. While NeRF and its extensions have shown a powerful capability of rendering photo‐realistic novel views of a single 3D scene, managing these growing 3D NeRF assets efficiently is a new scientific problem. Very few works focus on the efficient representation or continual learning capability of multiple scenes, which is crucial for the practical applications of NeRF. To achieve these goals, our key idea is to represent multiple scenes as the linear combination of a cross‐scene weight matrix and a set of scene‐specific weight matrices generated from a global parameter generator. Furthermore, we propose an uncertain surface knowledge distillation strategy to transfer the radiance field knowledge of previous scenes to the new model. Representing multiple 3D scenes with such weight matrices significantly reduces memory requirements. At the same time, the uncertain surface distillation strategy greatly alleviates the catastrophic forgetting problem and maintains the photo‐realistic rendering quality of previous scenes. Experiments show that the proposed approach achieves state‐of‐the‐art rendering quality for continual learning NeRF on the NeRF‐Synthetic, LLFF, and TanksAndTemples datasets while incurring only a very low storage cost.
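A minimal PyTorch sketch of the weight-composition idea, where a layer's weight is a shared cross-scene matrix plus a scene-specific matrix emitted by a global parameter generator; the dimensions, the generator architecture, and all names are illustrative assumptions rather than the paper's exact design.

import torch
import torch.nn as nn

class SceneConditionedLinear(nn.Module):
    def __init__(self, in_dim, out_dim, scene_code_dim):
        super().__init__()
        self.cross_scene = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)  # shared across scenes
        self.generator = nn.Sequential(                                       # global parameter generator
            nn.Linear(scene_code_dim, 128), nn.ReLU(),
            nn.Linear(128, out_dim * in_dim),
        )
        self.out_dim, self.in_dim = out_dim, in_dim

    def forward(self, x, scene_code):
        delta = self.generator(scene_code).view(self.out_dim, self.in_dim)    # scene-specific weights
        weight = self.cross_scene + delta                                     # linear combination
        return x @ weight.t()

layer = SceneConditionedLinear(63, 256, scene_code_dim=32)
y = layer(torch.randn(1024, 63), torch.randn(32))

Only the small scene codes (and the shared parameters) need to be stored per scene, which is where the memory savings come from.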
November 2024 · 37 Reads
Cameras cannot capture the same colors as those seen by the human eye because the eye and the cameras' sensors differ in their spectral sensitivity. To obtain a plausible approximation of perceived colors, the camera's Image Signal Processor (ISP) employs a color correction step. However, even advanced color correction methods cannot solve this underdetermined problem, and visible color inaccuracies are always present. Here, we explore an approach in which we can capture accurate colors with a regular camera by optimizing the spectral composition of the illuminant and capturing one or more exposures. We jointly optimize for the signal‐to‐noise ratio and for the color accuracy irrespective of the spectral composition of the scene. One or more images captured under controlled multispectral illuminants are then converted into a color‐accurate image as seen under the standard illuminant of D65. Our optimization allows us to reduce the color error by 20–60% (in terms of CIEDE 2000), depending on the number of exposures and camera type. The method can be used in applications in which illumination can be controlled, and high colour accuracy is required, such as product photography or with a multispectral camera flash. The code is available at https://github.com/gfxdisp/multispectral_color_correction.
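For context, the classical colour-correction step that such a pipeline builds on fits a 3x3 matrix mapping camera RGB to reference tristimulus values by least squares over a set of colour patches; the sketch below shows only this standard step, not the paper's joint illuminant and exposure optimization, and the patch data are placeholders.

import numpy as np

def fit_ccm(camera_rgb, reference_xyz):
    """camera_rgb, reference_xyz: (N, 3) patch measurements. Returns a 3x3 correction matrix M."""
    M, *_ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
    return M.T  # so that corrected = M @ rgb for a single colour vector

camera = np.random.rand(24, 3)      # e.g., a 24-patch chart captured by the camera
reference = np.random.rand(24, 3)   # corresponding reference values under the target illuminant
M = fit_ccm(camera, reference)
corrected = camera @ M.T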
November 2024 · 7 Reads
Holography stands at the forefront of visual technology, offering immersive, three‐dimensional visualizations through the manipulation of light wave amplitude and phase. Although generative models have been extensively explored in the image domain, their application to holograms remains relatively underexplored due to the inherent complexity of phase learning. Exploiting generative models for holograms offers exciting opportunities for advancing innovation and creativity, such as semantic‐aware hologram generation and editing. Currently, the most viable approach for utilizing generative models in the hologram domain involves integrating an image‐based generative model with an image‐to‐hologram conversion model, which comes at the cost of increased computational complexity and inefficiency. To tackle this problem, we introduce P‐Hologen, the first end‐to‐end generative framework designed for phase‐only holograms (POHs). P‐Hologen employs vector quantized variational autoencoders to capture the complex distributions of POHs. It also integrates the angular spectrum method into the training process, constructing latent spaces for complex phase data using strategies from the image processing domain. Extensive experiments demonstrate that P‐Hologen achieves superior quality and computational efficiency compared to the existing methods. Furthermore, our model generates high‐quality unseen, diverse holographic content from its learned latent space without requiring pre‐existing images. Our work paves the way for new applications and methodologies in holographic content creation, opening a new era in the exploration of generative holographic content. The code for our paper is publicly available on https://github.com/james0223/P-Hologen.
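The angular spectrum method mentioned above is a standard FFT-based free-space propagation operator; a minimal numpy sketch follows, with grid size, wavelength, and pixel pitch as placeholder values.

import numpy as np

def asm_propagate(field, wavelength, pitch, distance):
    """Propagate a complex field by `distance` using the angular spectrum method."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=pitch)
    fy = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))      # clamp evanescent components
    H = np.exp(1j * kz * distance) * (arg > 0)          # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: propagate a phase-only hologram to the image plane and look at the intensity.
phase = np.random.rand(512, 512) * 2 * np.pi
image_field = asm_propagate(np.exp(1j * phase), wavelength=520e-9, pitch=8e-6, distance=0.1)
intensity = np.abs(image_field) ** 2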
November 2024 · 32 Reads
3D Gaussian Splatting (3DGS) has emerged as a promising representation for scene reconstruction and novel view synthesis thanks to its explicit representation and real‐time capabilities. This technique thus holds immense potential for use in mapping applications. Consequently, there is a growing need for an efficient and effective camera relocalization method to complement the advantages of 3DGS. This paper presents a camera relocalization method, namely GauLoc, in a scene represented by 3DGS. Unlike previous methods that rely on pose regression or photometric alignment, our proposed method leverages the differential rendering capability provided by 3DGS. The key insight of our work is the proposed implicit featuremetric alignment, which effectively optimizes the alignment between rendered keyframes and the query frames, and leverages epipolar geometry to facilitate the convergence of camera poses conditioned on the explicit 3DGS representation. The proposed method significantly improves relocalization accuracy even in complex scenarios with large initial camera rotation and translation deviations. Extensive experiments validate the effectiveness of our proposed method, showcasing its potential to be applied in many real‐world applications. Source code will be released at https://github.com/xinzhe11/GauLoc.
November 2024 · 3 Reads
We describe a method to convert 3D shapes into neural implicit form such that the shape is approximated in a guaranteed conservative manner. This means the input shape is strictly contained inside the neural implicit or, alternatively, vice versa. Such conservative approximations are of interest in a variety of applications, including collision detection, occlusion culling, or intersection testing. Our approach is the first to guarantee conservativeness in this context of neural implicits. We support input given as mesh, voxel set, or implicit function. Adaptive affine arithmetic is employed in the neural network fitting process, enabling the reasoning over infinite sets of points despite using a finite set of training data. Combined with an interior point style optimization approach this yields the desired guarantee.
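To illustrate how bounds can be propagated through a network to reason about infinite point sets, the toy sketch below uses plain interval arithmetic on a single linear+ReLU layer; the paper's adaptive affine arithmetic tracks correlations between terms and yields much tighter bounds than this simplification.

import numpy as np

def interval_linear(lo, hi, W, b):
    """Elementwise bounds of W @ x + b over all x with lo <= x <= hi."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

def interval_relu(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# If the upper bound of the final signed-distance output is strictly negative over a region,
# every point of that region is guaranteed to lie inside the implicit, which is the kind of
# certificate a conservative approximation needs.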
November 2024 · 25 Reads
Garment alteration is a practical technique to adapt an existing garment to fit a target body shape. Typically executed by skilled tailors, this process involves a series of strategic fabric operations—removing or adding material—to achieve the desired fit on a target body. We propose an innovative approach to automate this process by computing a set of practically feasible modifications that adapt an existing garment to fit a different body shape. We first assess the garment's fit on a reference body; then, we replicate this fit on the target by deriving a set of pattern modifications via a linear program. We compute these alterations by employing an iterative process that alternates between global geometric optimization and physical simulation. Our method utilizes geometry‐based simulation of woven fabric's anisotropic behavior, accounts for tailoring details like seam matching, and incorporates elements such as darts or gussets. We validate our technique by producing digital and physical garments, demonstrating practical and achievable alterations.
November 2024 · 30 Reads
We present VRTree, an example‐based interactive virtual reality (VR) system designed to efficiently create diverse 3D tree models while faithfully preserving the botanical characteristics of real‐world references. Our method employs a novel representation called Hierarchical Branch Lobe (HBL), which captures the hierarchical features of trees and serves as a versatile intermediary for intuitive VR interaction. The HBL representation decomposes a 3D tree into a series of concise examples, each consisting of a small set of main branches, secondary branches, and lobe‐bounded twigs. The core of our system involves two key components: (1) We design an automatic algorithm to extract an initial library of HBL examples from real tree point clouds. These HBL examples can be optionally refined according to user intentions through an interactive editing process. (2) Users can interact with the extracted HBL examples to assemble new tree structures, ensuring the local features align with the target tree species. A shape‐guided procedural growth algorithm then transforms these assembled HBL structures into highly realistic, fine‐grained 3D tree models. Extensive experiments and user studies demonstrate that VRTree outperforms current state‐of‐the‐art approaches, offering a highly effective and easy‐to‐use VR tool for tree modeling.
November 2024 · 8 Reads
Uncovering causal relations from event sequences to guide decision‐making has become an essential task across various domains. Unfortunately, this task remains a challenge because real‐world event sequences are usually collected from multiple sources. Most existing works are specifically designed for homogeneous causal analysis between events from a single source, without considering cross‐source causality. In this work, we propose a heterogeneous causal analysis algorithm to detect the heterogeneous causal network between high‐level events in multi‐source event sequences while preserving the causal semantic relationships between diverse data sources. Additionally, the flexibility of our algorithm allows us to incorporate high‐level event similarity into the learning model and provides a fuzzy modification mechanism. Based on the algorithm, we further propose a visual analytics framework that supports interpreting the causal network at three granularities and offers a multi‐granularity modification mechanism to incorporate user feedback efficiently. We evaluate the accuracy of our algorithm through an experimental study, illustrate the usefulness of our system through a case study, and demonstrate the efficiency of our modification mechanisms through a user study.
November 2024 · 46 Reads
The simulation and modelling of tree growth is a complex subject with a long history and an important area of research in both computer graphics and botany. For more than 50 years, new approaches to this topic have been presented frequently, including several aspects to increase realism. To further improve on these achievements, we present a compact and robust functional‐structural plant model (FSPM) that is consistent with botanical rules. While we show several extensions to typical approaches, we focus mainly on the distribution of light as a resource in three‐dimensional space. We present four different light distribution models based on ray tracing, space colonization, voxel‐based approaches, and bounding volumes. By simulating individual light sources, we are able to create a more specific scene setup for plant simulation than has been presented in the past. By taking this more accurate distribution of light in the environment into account, our technique is capable of modelling realistic and diverse tree models.