2023 journal article

NOPE-SAC: Neural One-Plane RANSAC for Sparse-View Planar 3D Reconstruction

IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–15.

By: B. Tan, N. Xue, T. Wu* & G. Xia

TL;DR: A novel Neural One-PlanE RANSAC framework (NOPE-SAC) that uses neural networks to learn one-plane pose hypotheses from 3D plane correspondences, significantly improving camera pose estimation for two-view inputs with severe viewpoint changes. (via Semantic Scholar)
Source: ORCID
Added: October 11, 2023

This article studies the challenging two-view 3D reconstruction problem in a rigorous sparse-view configuration, which suffers from insufficient correspondences in the input image pairs for camera pose estimation. We present a novel Neural One-PlanE RANSAC framework (termed NOPE-SAC for short) that exploits the capability of neural networks to learn one-plane pose hypotheses from 3D plane correspondences. Building on top of a Siamese network for plane detection, our NOPE-SAC first generates putative plane correspondences with a coarse initial pose. It then feeds the learned 3D plane correspondences into shared MLPs to estimate one-plane camera pose hypotheses, which are subsequently reweighted in a RANSAC manner to obtain the final camera pose. Because the neural one-plane pose minimizes the number of plane correspondences needed for adaptive pose hypothesis generation, it enables stable pose voting and reliable pose refinement from only a few plane correspondences in sparse-view inputs. In the experiments, we demonstrate that our NOPE-SAC significantly improves camera pose estimation for two-view inputs with severe viewpoint changes, setting several new state-of-the-art results on two challenging benchmarks for sparse-view 3D reconstruction, i.e., MatterPort3D and ScanNet.
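The RANSAC-style voting step described above can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the pose hypotheses here are given as explicit rotation/translation pairs (in NOPE-SAC they come from learned MLPs over plane correspondences), the plane-residual metric and the 0.05 inlier threshold are arbitrary choices for the demo, and all function names (`vote_pose`, `transform_plane`, etc.) are hypothetical.

```python
import math

def rot_z(theta):
    # 3x3 rotation matrix about the z-axis (enough for a toy demo)
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matvec(R, v):
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def transform_plane(plane, R, t):
    # A plane (n, d) with n.x = d in view 1 maps, under x2 = R x1 + t,
    # to (R n, d + (R n).t) in view 2.
    n, d = plane
    n2 = matvec(R, n)
    return (n2, d + dot(n2, t))

def plane_residual(p, q):
    # Crude distance between two (normal, offset) plane parameterizations.
    (n1, d1), (n2, d2) = p, q
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(n1, n2))) + abs(d1 - d2)

def vote_pose(hypotheses, corrs, thresh=0.05):
    # RANSAC-style voting: score each candidate pose (R, t) by the number
    # of plane correspondences it explains, and return the best scorer.
    best, best_score = None, -1
    for R, t in hypotheses:
        score = sum(
            1 for p1, p2 in corrs
            if plane_residual(transform_plane(p1, R, t), p2) < thresh
        )
        if score > best_score:
            best, best_score = (R, t), score
    return best, best_score

# Toy scene: three planes observed in view 1, a ground-truth relative pose,
# and the induced correspondences in view 2.
planes1 = [((1.0, 0.0, 0.0), 1.0), ((0.0, 1.0, 0.0), 0.5), ((0.0, 0.0, 1.0), 2.0)]
R_true, t_true = rot_z(0.3), [0.1, 0.0, 0.2]
corrs = [(p, transform_plane(p, R_true, t_true)) for p in planes1]

# Two competing pose hypotheses: the true one and a wrong one.
hypotheses = [(R_true, t_true), (rot_z(1.0), [0.0, 0.0, 0.0])]
(best_R, best_t), score = vote_pose(hypotheses, corrs)
```

The key property the abstract highlights carries over even in this sketch: because each hypothesis needs only one plane correspondence to be generated, voting remains meaningful when only a handful of correspondences survive the sparse-view setting.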