Structure Learning for 3D Point Cloud Generation from Single RGB Images
Abstract
3D point clouds can represent complex 3D objects of arbitrary topologies with fine-grained details. They are, however, hard to regress from images using convolutional neural networks, making tasks such as 3D reconstruction from monocular RGB images challenging. Unlike images and volumetric grids, point clouds are unstructured and thus lack a proper parameterization, which makes them difficult to process with convolutional operations. Existing point-based 3D reconstruction methods that attempt to address this problem rely on complex end-to-end architectures with high computational costs. In this paper, we instead propose a novel mechanism that decouples the 3D reconstruction problem from the structure (or parameterization) learning task, making the 3D reconstruction of objects of arbitrary topologies tractable and thus easier to learn. We achieve this using a novel Teacher-Student network in which the Teacher learns to structure the point clouds, and the Student then harnesses the knowledge learned by the Teacher to efficiently regress accurate 3D point clouds. We train the Teacher network with 3D ground-truth supervision and the Student network with the Teacher's annotations. Finally, we employ a novel refinement network to overcome the performance upper bound set by the Teacher network. Extensive experiments on the ShapeNet and Pix3D benchmarks, as well as on in-the-wild images, demonstrate that the proposed approach outperforms previous methods in both reconstruction accuracy and visual quality.
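To make the two-stage training protocol concrete, below is a minimal PyTorch sketch, not the paper's actual implementation: the module `PointDecoder`, the `chamfer_distance` helper, all shapes, and the random stand-in data are illustrative assumptions. It reflects one plausible reading of the abstract: the Teacher is trained against ground-truth point sets with a set-based (Chamfer) loss, and because the Teacher's output is structured (consistently ordered), the Student can then be trained against the Teacher's annotations with a simple point-wise loss. The refinement network is omitted for brevity.

```python
# Minimal sketch of the Teacher-Student protocol described in the abstract.
# All names, architectures, and shapes here are hypothetical placeholders.
import torch
import torch.nn as nn

def chamfer_distance(pred, gt):
    # Symmetric Chamfer distance between two batched point sets (B, N, 3).
    d = torch.cdist(pred, gt)  # (B, N, M) pairwise Euclidean distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

class PointDecoder(nn.Module):
    """Toy MLP mapping an image feature vector to n_points 3D points."""
    def __init__(self, feat_dim=256, n_points=1024):
        super().__init__()
        self.n_points = n_points
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, n_points * 3),
        )

    def forward(self, feat):
        return self.mlp(feat).view(-1, self.n_points, 3)

# Stand-ins for encoded image features and ground-truth point clouds.
feat = torch.randn(8, 256)
gt_points = torch.randn(8, 1024, 3)

# Stage 1: the Teacher learns a structured point-cloud representation
# from 3D ground truth, using a set-based loss (order-invariant).
teacher = PointDecoder()
opt_t = torch.optim.Adam(teacher.parameters(), lr=1e-4)
loss_t = chamfer_distance(teacher(feat), gt_points)
opt_t.zero_grad(); loss_t.backward(); opt_t.step()

# Stage 2: the Student regresses point clouds against the Teacher's
# annotations; since those are consistently ordered, a cheap point-wise
# loss suffices instead of a set-based one.
student = PointDecoder()
opt_s = torch.optim.Adam(student.parameters(), lr=1e-4)
with torch.no_grad():
    annotations = teacher(feat)  # Teacher output used as pseudo-labels
loss_s = nn.functional.mse_loss(student(feat), annotations)
opt_s.zero_grad(); loss_s.backward(); opt_s.step()
```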