Figure: Learned prototypes and comparisons. We compare the prototypes of our different shape modelings, discovered on ABC [32] (left, 5 shape models out of 10) and ShapeNetCore [8] (right, 5 shape models out of 55). Note how the prototypes become sharper as the shape modeling complexity increases, respectively with alignment-awareness and 5-dimensional linear families.

In the implicit parametrization, each basis vector is defined by a function $V_i^k : \mathbb{R}^3 \to \mathbb{R}^3$ mapping any point of 3D space to a displacement direction. Writing $[c^k]_p$ for the 3D coordinates of the $p$-th point of prototype $c^k$, the 3D coordinates $[v_i^k]_p$ of the $i$-th basis vector associated with point $p$ are given by $[v_i^k]_p = V_i^k([c^k]_p)$. Intuitively, the pointwise parametrization seems better suited to modeling complex and discontinuous transformations within a shape family, such as the appearance or disappearance of object parts. On the contrary, the transformations learned with implicit parametrizations are derived from continuous functions of 3D space and can be expected to be more regular. We compare both settings in Section 4.2 and show that pointwise parametrizations provide better shape reconstructions, but that the implicit parametrization yields more interpretable transformations that preserve semantic correspondences. Thus, unless specified otherwise, we use the implicit parametrization of the basis in the rest of the paper.
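To make the contrast between the two parametrizations concrete, here is a minimal PyTorch-style sketch. The class names, network sizes, and the usage snippet at the end are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the two basis parametrizations discussed above (assumed names/sizes).
import torch
import torch.nn as nn

class PointwiseBasis(nn.Module):
    """i-th basis vector stored as one free 3D offset per prototype point."""
    def __init__(self, num_points, dim=3):
        super().__init__()
        # [v_i^k]_p are direct parameters, one per point p (unconstrained).
        self.v = nn.Parameter(torch.zeros(num_points, dim))

    def forward(self, prototype_points):
        # Prototype coordinates are ignored: offsets need not be spatially regular.
        return self.v

class ImplicitBasis(nn.Module):
    """i-th basis vector given by a continuous function V_i^k: R^3 -> R^3."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, prototype_points):
        # [v_i^k]_p = V_i^k([c^k]_p): evaluate the displacement field at each point.
        return self.mlp(prototype_points)

# Usage: deform a prototype with D basis vectors and a low-dimensional code alpha.
P, D = 1024, 5
prototype = torch.randn(P, 3)                                  # [c^k]_p
bases = nn.ModuleList([ImplicitBasis() for _ in range(D)])
alpha = torch.randn(D)                                         # shape coefficients
offsets = torch.stack([b(prototype) for b in bases], dim=0)    # (D, P, 3)
shape = prototype + (alpha[:, None, None] * offsets).sum(dim=0)
```

With `PointwiseBasis`, nearby prototype points can move independently, which matches the intuition above about discontinuous transformations; with `ImplicitBasis`, nearby points receive similar displacements because they are evaluated by the same continuous MLP.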

Source publication
Preprint
In this paper, we revisit the classical representation of 3D point clouds as linear shape models. Our key insight is to leverage deep learning to represent a collection of shapes as affine transformations of low-dimensional linear shape models. Each linear model is characterized by a shape prototype, a low-dimensional shape basis and two neural net...
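For readers who prefer code, the decoding step summarized in this abstract can be sketched as follows. The function name, tensor shapes, and the assumption that the affine parameters A and t come from the predictor networks are illustrative, not taken from the paper.

```python
# Minimal sketch: a shape as an affine transformation of a low-dimensional
# linear shape model (prototype + linear combination of basis vectors).
import torch

def reconstruct(prototype, basis, alpha, A, t):
    """prototype: (P, 3), basis: (D, P, 3), alpha: (D,), A: (3, 3), t: (3,)."""
    deformed = prototype + torch.einsum('d,dpc->pc', alpha, basis)  # linear family
    return deformed @ A.T + t                                       # affine alignment
```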

Context in source publication

Context 1
... We present in Figure 3 examples of prototypes learned when successively adding different components of our method. The first line, denoted "Ours, proto", represents the linear families' prototypes learned during the first stage of our training ($R_{\mathrm{proto}}$). ...