3D Reconstruction from Unoriented Point Clouds

In real 3D acquisition pipelines, point clouds produced by scanning or multi-view reconstruction are often noisy, incomplete, unevenly sampled, and lack globally consistent normals. For decades, the dominant paradigm decomposed surface reconstruction into two seemingly independent stages: (i) computing globally consistent point orientations from raw data, and (ii) reconstructing the surface from the oriented points.

Although each stage was studied extensively, this pipeline had a fundamental weakness: orientation became a fragile preprocessing requirement. Errors introduced during orientation propagated irreversibly to the final reconstruction, particularly for noisy inputs, sparse sampling, thin structures, and large-scale scenes.

My SIGGRAPH 2022 paper on Iterative Poisson Surface Reconstruction (iPSR) introduced a fundamentally different formulation. Instead of treating orientation and reconstruction as sequential tasks, iPSR demonstrated that they are two coupled aspects of the same geometric inference problem that can be solved jointly within a unified framework. In iPSR, normals are treated as optimization variables rather than fixed inputs. Starting from unoriented (or even randomly oriented) points, the algorithm alternates between surface reconstruction and normal updates derived from the reconstructed geometry until convergence. This “reconstruction-as-orientation” perspective eliminates the need for an external global orientation stage while preserving the scalability and robustness of Poisson reconstruction.
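The alternation described above can be sketched in a few lines of pseudocode. The specific normal-update rule and stopping test shown here are one plausible instantiation of "normal updates derived from the reconstructed geometry," not a verbatim specification of the paper:

```
# iPSR loop (schematic pseudocode; update rule and tolerance are illustrative)
N <- random unit normals for the input points P
repeat:
    M <- ScreenedPoissonReconstruction(P, N)    # surface from current normals
    for each point p_i in P:
        N_i <- normalize(area-weighted average normal
                         of the faces of M nearest to p_i)
until the largest per-point change in N falls below a tolerance
return M
```

Because each iteration reuses the same Poisson solver that oriented-input pipelines already rely on, the loop inherits that solver's scalability while removing orientation as a separate preprocessing stage.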

The iPSR formulation opened the door to a unified treatment of orientation and reconstruction, establishing reconstruction-as-orientation as a viable paradigm. Since 2022, a growing body of work has adopted this principle, treating point orientations as internal variables rather than fixed preprocessing outputs. Among these developments, my group has continued to advance the framework along both theoretical and computational dimensions.

BIM (SIGGRAPH 2024) strengthens robustness by deriving a principled orientation objective from the Dirichlet energy of the generalized winding number field, expressed via a boundary integral formulation. This provides improved stability under noise, thin structures, and complex manifold geometry.

DWG (TOG 2025) further advances scalability by introducing a fully parallel, solver-free formulation based on diffusing winding gradients. By eliminating large linear solves and nonlinear optimization, DWG reconstructs models of 10–20 million points in minutes, achieving speedups of one to two orders of magnitude over prior global approaches. To date, DWG is among the most efficient algorithms reported for reconstructing watertight surfaces from unoriented point clouds.

More recently, DiWR (arXiv 2026) extends DWG by incorporating per-point confidence coefficients and adaptive weights within an optimization framework. This refinement significantly improves robustness under heavy noise and corruption, enabling stable reconstruction even when input points exhibit substantial degradation.
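Both BIM and DWG build on the generalized winding number. As a concrete anchor, the sketch below evaluates the point-cloud winding number for a 2D curve (the papers work with 3D surfaces; the circle, the uniform arc-length weights, and the function name are illustrative choices, not taken from either paper). With correctly oriented normals, the field is close to 1 inside the shape and 0 outside, which is exactly the signal the orientation objectives exploit:

```python
import numpy as np

# Sample an oriented 2D circle: points p_i, outward unit normals n_i,
# and per-point arc-length weights a_i (the analogue of area weights in 3D).
m = 200
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
points = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # unit circle
normals = points.copy()                                     # outward normals
weights = np.full(m, 2.0 * np.pi / m)                       # arc length per sample

def winding_number(q, points, normals, weights):
    """Generalized winding number of query q w.r.t. oriented 2D points:
    w(q) = (1 / 2*pi) * sum_i a_i * (p_i - q) . n_i / |p_i - q|^2."""
    d = points - q                                    # (m, 2) offsets
    r2 = np.einsum("ij,ij->i", d, d)                  # squared distances
    num = np.einsum("ij,ij->i", d, normals)           # (p_i - q) . n_i
    return np.sum(weights * num / r2) / (2.0 * np.pi)

w_in = winding_number(np.array([0.2, -0.1]), points, normals, weights)
w_out = winding_number(np.array([2.0, 0.5]), points, normals, weights)
# w_in is close to 1 (inside the circle), w_out close to 0 (outside)
```

Flipping any subset of the normals perturbs this field away from the crisp 0/1 indicator, which is why energies defined on the winding number field can drive orientation as an optimization.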

Beyond watertight assumptions, my collaborators and I have developed a series of unsigned and medial implicit representations, including GeoUDF (ICCV 2023), DEUDF (AAAI 2025), LoSF-UDF (CVPR 2025), DACPO (SIGGRAPH 2025), VAD (arXiv 2025), and Q-MDF (TOG 2026), that remain well-defined even when inside/outside labeling is ill-posed or fundamentally ambiguous.

However, the absence of sign information makes zero level-set extraction a notoriously challenging problem. Most existing methods rely heavily on gradient information, which becomes numerically unstable near the surface due to the non-differentiable behavior of unsigned distance fields at the zero level set. As a result, these approaches often produce surfaces with undesirable topological artifacts, such as small holes, spurious disconnected components, or unstable thin structures. To address this difficulty, we developed a series of optimization-based extraction methods, namely DCUDF (SIGGRAPH Asia 2023), DCUDF2 (TVCG 2025), and MIND (NeurIPS 2025), which formulate surface recovery as a globally consistent variational problem rather than a purely local gradient-based process. These methods substantially improve robustness and enable reliable reconstruction of open, thin, and even non-manifold geometries from unsigned implicit fields.
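The gradient instability at the zero level set is easy to observe numerically. The toy 2D experiment below (not any specific paper's method; the brute-force distance query and step size are illustrative) evaluates the unsigned distance field of a point-sampled circle and compares finite-difference gradient magnitudes away from and on the surface. Away from the surface the gradient has unit norm, as any distance field should; on the surface, where the field has a non-differentiable minimum, central differences collapse toward zero, so any extraction scheme that normalizes or follows this gradient loses its footing exactly where it matters most:

```python
import numpy as np

# Unsigned distance field (UDF) of a point-sampled unit circle, evaluated
# by brute-force nearest-point queries (a KD-tree would be used at scale).
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
samples = np.stack([np.cos(theta), np.sin(theta)], axis=1)

def udf(q):
    """Distance from query q to the nearest sample point."""
    return np.min(np.linalg.norm(samples - q, axis=1))

def grad_norm(q, h=1e-3):
    """Central-difference gradient magnitude of the UDF at q."""
    g = np.array([(udf(q + h * e) - udf(q - h * e)) / (2.0 * h)
                  for e in np.eye(2)])
    return float(np.linalg.norm(g))

g_far = grad_norm(np.array([1.5, 0.0]))   # away from the surface: |grad| ~ 1
g_near = grad_norm(np.array([1.0, 0.0]))  # on the surface: |grad| collapses
```

This is the degeneracy that motivates formulating extraction as a global variational problem instead of a purely local gradient march.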

Collectively, this program transformed reconstruction from a brittle two-stage pipeline into a unified inference framework. It altered the problem definition, delivered scalable GPU-friendly implementations, and broadened applicability to open and non-manifold geometries that dominate real scanning scenarios. The orientation–reconstruction paradigm now underpins emerging research directions in digital geometry processing and large-scale 3D reconstruction.

Key Team Members

NTU Team. Research Fellows: Fei Hou (later Professor, University of Chinese Academy of Sciences), Jiangbei Hu (later Associate Professor, Dalian University of Technology), and Chen Zong (later Associate Professor, Nanjing University of Aeronautics and Astronautics); PhD Students: Jiayi Kong, Jiaze Li, Junkai Deng, Daisheng Jin, and Yanggeng Li.

Collaborators. University of Chinese Academy of Sciences: Fei Hou, Xuhui Chen, Zhuodong Li, and Cheng Xu; City University of Hong Kong: Junhui Hou and Siyu Ren; Beijing Normal University: Weizhou Liu; Texas A&M University: Wenping Wang.



Copyright Notice: These materials are provided for academic dissemination only. Copyright remains with the authors or respective copyright holders. Reproduction or redistribution may require prior permission.