Danillo B. Graziosi’s research while affiliated with Sony Corporation and other places


Publications (5)


Dynamic Mesh Coding Using Orthogonal Atlas Projection
  • Conference Paper
  • June 2024

Danillo B. Graziosi · Kao Hayashi

Figures:
  • Illustration of point cloud (top row) and mesh (bottom row) representations. A 3D point cloud is a discrete set of points in 3D space (a); each point can be assigned a color (b). Basic rendering of a point cloud leads to non-continuous surfaces (b, c), so point clouds often require a huge number of points (e.g., millions); some splatting methods fill these inter-point spaces at the rendering stage (d). A 3D mesh is a collection of vertices (3D points), edges, and faces that defines a polyhedral surface (e); faces usually consist of triangles. Color images, named texture maps (f), can be mapped onto these triangles to colorize the surface (g), so the final rendering of a textured mesh produces a continuous surface (h). Longdress model from the MPEG dataset, courtesy of 8i [1]
  • The 11 source models used to create our dataset: sequences of textured meshes composed of 300 frames each
  • Illustrations of the visual effects of the introduced distortions, for the redandblack volumetric video
  • Mean Opinion Scores (MOS) and 95% confidence intervals for all stimuli of our dataset
  • Mesh grid sampling: the mesh surface points sampled by ray-casting from the grid at the sequence bounding box are added to the final point cloud

Crafting the MPEG metrics for objective and perceptual quality assessment of volumetric videos
  • Article
  • Publisher preview available
  • June 2023
  • 5 Citations

Quality and User Experience

Efficient objective and perceptual metrics are valuable tools to evaluate the visual impact of compression artifacts on the visual quality of volumetric videos (VVs). In this paper, we present some of the MPEG group's efforts to create, benchmark, and calibrate objective quality assessment metrics for volumetric videos represented as textured meshes. We created a challenging dataset of 176 volumetric videos impaired with various distortions and conducted a subjective experiment to gather human opinions (more than 5896 subjective scores were collected). We adapted two state-of-the-art model-based metrics for point cloud evaluation to our context of textured mesh evaluation by selecting efficient sampling methods. We also present a new image-based metric for the evaluation of such VVs, whose purpose is to reduce the cumbersome computation times inherent to the point-based metrics due to their use of multiple kd-tree searches. Each metric presented above is calibrated (i.e., selection of the best values for parameters such as the number of views or the grid sampling density) and evaluated on our new ground-truth subjective dataset. For each metric, the optimal selection and combination of features is determined by logistic regression through cross-validation. This performance analysis, combined with MPEG experts' requirements, led to the validation of two selected metrics and to recommendations on the features of most importance through learned feature weights.
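The point-based metrics the abstract refers to compare two sampled point clouds with nearest-neighbour searches, which is why they lean on repeated kd-tree queries. As a minimal illustration (not the MPEG reference software; the function name and PSNR normalization are assumptions), here is a symmetric point-to-point (D1) PSNR sketch using SciPy's cKDTree:

```python
import numpy as np
from scipy.spatial import cKDTree

def p2point_psnr(ref_pts, deg_pts, peak):
    """Symmetric point-to-point (D1) PSNR between two point clouds.

    ref_pts, deg_pts: (N, 3) and (M, 3) arrays of XYZ coordinates.
    peak: signal peak used for PSNR normalization (e.g. the bounding
    box diagonal). Hypothetical sketch, not the MPEG implementation.
    """
    # Nearest-neighbour distances in both directions (two kd-trees).
    d_ref_to_deg, _ = cKDTree(deg_pts).query(ref_pts)
    d_deg_to_ref, _ = cKDTree(ref_pts).query(deg_pts)
    # Symmetric error: take the worse of the two directed MSEs.
    mse = max(np.mean(d_ref_to_deg ** 2), np.mean(d_deg_to_ref ** 2))
    if mse == 0.0:
        return float("inf")  # identical clouds
    return 10.0 * np.log10(peak ** 2 / mse)
```

Each metric evaluation costs two kd-tree builds plus one query per point, which is the computational burden the paper's image-based metric is designed to avoid.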




Citations (4)


... Thus, a 3D-to-2D projection-based mechanism (Yang et al., 2020) enables a simplified objective comparison between point cloud quality evaluation and conventional image-based measurements. Consequently, with the aim of building an open standard for compactly representing 3D point clouds, the Moving Picture Experts Group (MPEG) proposes establishing quality metrics such as point-to-point (p2point), point-to-plane (p2plane), and point-to-mesh (p2mesh) (Marvie et al., 2023). The p2point metric quantifies the distances between corresponding points to measure the degree of distortion; p2plane projects the obtained p2point distances along the surface normal direction; p2mesh reconstructs the surface and then measures the distance from a point to the surface, but its efficiency is strongly dependent on the accuracy of the surface reconstruction algorithm. ...
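The p2plane variant described in this excerpt refines p2point by projecting each nearest-neighbour offset onto the reference surface normal, so errors tangential to the surface are discounted. A minimal sketch of that projection step (assuming unit normals are supplied; the function name is hypothetical and this is not the MPEG reference code):

```python
import numpy as np
from scipy.spatial import cKDTree

def p2plane_mse(ref_pts, ref_normals, deg_pts):
    """Point-to-plane (D2) mean squared error, degraded -> reference.

    ref_pts: (N, 3) reference points; ref_normals: (N, 3) unit normals
    at those points; deg_pts: (M, 3) degraded points.
    """
    # Find each degraded point's nearest reference point.
    _, idx = cKDTree(ref_pts).query(deg_pts)
    offsets = deg_pts - ref_pts[idx]
    # Project the offset onto the reference normal: errors along the
    # surface tangent contribute nothing, unlike plain p2point.
    proj = np.einsum("ij,ij->i", offsets, ref_normals[idx])
    return float(np.mean(proj ** 2))
```

Sliding a flat surface along itself therefore yields a near-zero p2plane error even though the p2point error is large, which is exactly the perceptual behaviour the projection is meant to capture.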

Reference:

Geometric Accuracy Analysis Between Neural Radiance Fields (NeRFs) and Terrestrial Laser Scanning (TLS)
Crafting the MPEG metrics for objective and perceptual quality assessment of volumetric videos

Quality and User Experience

... The compression of three-dimensional (3D) mesh models has been a topic of interest in the research community for a number of decades [1], but the recent appearance of volumetric video content has given rise to a new challenge: the efficient encoding of dynamic time-varying meshes (TVMs) [2]. Unlike dynamic animated meshes, for which a number of compression algorithms have already been developed in the past [1], [2], the biggest challenge with TVMs is that they can have a variable number of vertices, different connectivity and topology, and different attribute data for each frame of a volumetric video. ...

Coding of dynamic 3D meshes
  • Citing Chapter
  • January 2023

... Extending 2D video block matching, [26] divided a mesh into cubic blocks, each of which searches for the best matching block in the reference mesh frame, while a patch-based matching algorithm was used in [50]. Recently, [19] used the new Visual Volumetric Video-based Coding (V3C) standard to encode meshes by using orthogonal projections, followed by atlas packing and video coding. By sending the connectivity patch for every mesh frame, this method is able to deal with sequences with varying connectivity. ...
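The orthogonal projection step mentioned in this excerpt starts by assigning each face to one of the axis-aligned projection planes, typically the plane whose normal is most parallel to the face normal. A simplified sketch of that classification step (hypothetical function name; the actual V3C pipeline additionally clusters faces into patches and packs them into an atlas):

```python
import numpy as np

def classify_projection_axes(vertices, triangles):
    """Pick an axis-aligned projection plane per triangle.

    vertices: (V, 3) float array; triangles: (T, 3) int index array.
    Returns, for each triangle, the axis (0=X, 1=Y, 2=Z) whose plane
    the face would be orthogonally projected onto.
    """
    tri = vertices[triangles]                       # (T, 3, 3) corner coords
    # Face normal from two edge vectors (unnormalized is enough here).
    normals = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    # The dominant normal component picks the projection plane that
    # minimizes distortion of the projected triangle.
    return np.argmax(np.abs(normals), axis=1)
```

Choosing the dominant-normal axis keeps the projected triangles as large as possible in the atlas, which in turn helps the downstream 2D video codec.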

Video-Based Dynamic Mesh Coding
  • Citing Conference Paper
  • September 2021