3D Deep Learning for Geometric Analysis and Understanding
My research in 3D deep learning, initiated in 2020, was motivated by the remarkable success of deep learning in 2D vision tasks such as image recognition, segmentation, and generative modeling. While these advances demonstrated the power of data-driven representation learning, extending similar success to 3D domains presents fundamental challenges. Unlike images defined on regular grids with a global coordinate system, 3D data are typically unstructured and irregular, including point clouds, meshes, and implicit fields. They lack canonical parameterizations, consistent sampling patterns, and simple convolutional structures. To address this gap between 2D and 3D learning, my work integrates classical digital geometry processing and discrete differential geometry with modern deep learning techniques for geometric understanding and generation. Building upon these geometric foundations, we develop neural models for point cloud denoising, completion, semantic segmentation, and single- or multi-view reconstruction. A central theme of this research is embedding geometric structure into learning frameworks, enabling robustness, scalability, and faithful surface representation.
Neural Implicit Representations
Across my recent work on neural implicit representations, my goal has been to turn learned continuous fields, especially distance-based functions, into dependable geometric primitives: expressive enough to capture fine-scale real-world detail, and reliable enough to support downstream geometry processing. A major focus is learning unsigned distance fields (UDFs) directly from raw point clouds for high-fidelity reconstruction. GeoUDF (ICCV 2023) introduces geometry-guided learning to stabilize UDF prediction from noisy and incomplete scans; DEUDF (AAAI 2025) targets the persistent challenge of detail preservation; and LoSF-UDF (CVPR 2025) leverages local shape functions to reduce sensitivity to the training distribution and improve generalization. To bridge the gap from “a neural field” to usable geometry, especially when sign information is unavailable, we also develop principled discretization and extraction methods, including DCUDF (SIGGRAPH Asia 2023) and DCUDF2 (TVCG 2025), which improve the stability, efficiency, and accuracy of zero level-set recovery from learned UDFs. We then push these representations to more challenging regimes (e.g., non-manifold and multi-material interfaces in MIND, NeurIPS 2025) and extend neural fields into tools for shape computation and understanding: NeuroGF (NeurIPS 2023) enables fast geodesic distance and path queries, while Q-MDF (TOG 2026) exploits distance-field structure to robustly approximate and discretize neural medial axes. Most recently, SharpNet (arXiv 2026) advances neural representations by introducing controlled non-differentiability into MLPs, enabling faithful modeling of sharp features and piecewise-smooth behavior. Together, these works form a coherent agenda of geometry-aware neural fields with controllable regularity and reliable extraction, making implicit representations practical foundations for reconstruction, analysis, and shape reasoning.
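To make the central object concrete: a UDF assigns every 3D query point its unsigned distance to the target surface, and extraction means recovering the zero level set from those non-negative values alone. The toy NumPy sketch below (function names `udf_sphere` and `sample_near_zero_level_set` are illustrative, not from any paper listed here) uses an analytic sphere as a stand-in for a trained network and simply keeps samples whose field value falls below a threshold; the cited extraction methods solve the much harder problem of producing a consistent mesh from such values.

```python
import numpy as np

def udf_sphere(points, center=np.zeros(3), radius=0.5):
    # Toy analytic UDF: unsigned distance to a sphere surface.
    # Stands in for a learned, non-negative network f_theta(x).
    return np.abs(np.linalg.norm(points - center, axis=-1) - radius)

def sample_near_zero_level_set(udf, n=4096, eps=0.02, seed=0):
    # Rejection-sample query points whose field value falls below eps.
    # Real extraction pipelines (e.g., double-covering approaches)
    # instead recover a mesh; this only illustrates the field interface.
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(n, 3))
    return pts[udf(pts) < eps]

surface_pts = sample_near_zero_level_set(udf_sphere)
```

Every retained sample lies within `eps` of the sphere surface by construction; without a sign, the field alone cannot distinguish inside from outside, which is exactly why dedicated zero level-set extraction is nontrivial.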
- H. Niu, J. Deng, F. Hou, W. Wang, and Y. He.
SharpNet: Enhancing MLPs to Represent Functions with Controlled Non-differentiability, arXiv:2601.19683, 2026.
- J. Kong, C. Zong, J. Luo, S. Xin, F. Hou, H. Jiang, C. Qian, and Y. He.
Quasi-Medial Distance Field (Q-MDF): A Robust Method for Approximating and Discretizing Neural Medial Axes, ACM Transactions on Graphics, accepted, 2026. (to be presented at ACM SIGGRAPH 2026)
- X. Chen, F. Hou, W. Wang, H. Qin, and Y. He.
MIND: Material Interface Generation from UDFs for Non-Manifold Surface Reconstruction, NeurIPS, 2025.
- J. Hu, Y. Li, F. Hou, J. Hou, Z. Zhang, S. Wang, N. Lei, and Y. He.
A Lightweight UDF Learning Framework for 3D Reconstruction based on Local Shape Functions, CVPR, 2025.
- C. Xu, F. Hou, W. Wang, H. Qin, Z. Zhang, and Y. He.
Details Enhancement in Unsigned Distance Field Learning for High-Fidelity 3D Surface Reconstruction, AAAI, 2025.
- X. Chen, F. Yu, F. Hou, W. Wang, Z. Zhang, and Y. He.
DCUDF2: Improving Efficiency and Accuracy in Extracting Zero Level Sets from Unsigned Distance Fields, IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 10, pp. 9052-9065, 2025. (arXiv)
- S. Ren, J. Hou, X. Chen, Y. He, and W. Wang.
GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-Guided Distance Representation, ICCV, 2023.
- F. Hou, X. Chen, W. Wang, H. Qin, and Y. He.
Robust Zero Level-Set Extraction from Unsigned Distance Fields Based on Double Covering, ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH Asia '23), Vol. 42, No. 12, Article No. 247, 2023.
- Q. Zhang, J. Hou, Y. Adikusuma, W. Wang, and Y. He.
NeuroGF: A Neural Representation for Fast Geodesic Distance and Path Queries, NeurIPS, 2023.
Point Cloud Denoising
This line of work revisits point cloud denoising, a classical geometry processing problem, through the lens of learning, while keeping geometric structure and interpretability at the core. Starting from feature-preserving displacement regression (Pointfilter, TVCG 2021), we progressed to trained iterative refinement (IterativePFN, CVPR 2023), joint point/normal reasoning (PCDNF, TVCG 2024), and adaptive stopping to avoid over- and under-denoising (ASDN, AAAI 2025), with extensions to scene-scale inputs (3DMambaIPF, AAAI 2025). We further explored geometry-grounded and generative formulations, including implicit-field guidance (TVCG 2025), invertible latent-space denoising (CVPR 2024), adaptive latent alignment for unseen real noise (LaPDA, TVCG 2026), and deterministic residual diffusion guided by geometric displacements (TVCG 2026), alongside learned structural priors (AAAI 2026). We also summarized the broader landscape of deep learning-based denoising in a survey (arXiv 2025).
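The displacement-regression view underlying this line of work can be illustrated with a classical stand-in: where the cited networks predict per-point displacements, the sketch below (hypothetical helper names, not the algorithm of any paper here) moves each point toward a Gaussian-weighted average of its k nearest neighbors, iterates, and stops once displacements fall below a tolerance, a crude analogue of the adaptive stopping studied in ASDN.

```python
import numpy as np

def denoise_step(points, k=8, sigma=0.1):
    # One displacement step: move each point toward a Gaussian-weighted
    # mean of its k nearest neighbors. A trained network would predict
    # this displacement; the classical filter only shows the iteration.
    diff = points[:, None, :] - points[None, :, :]   # (n, n, 3)
    dist = np.linalg.norm(diff, axis=-1)             # (n, n)
    idx = np.argsort(dist, axis=1)[:, 1:k + 1]       # k-NN, skipping self
    out = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        w = np.exp(-(dist[i, nbrs] / sigma) ** 2)
        out[i] = (w[:, None] * points[nbrs]).sum(axis=0) / w.sum()
    return out

def iterative_denoise(points, iters=5, tol=1e-4):
    # Stop early once the largest per-point displacement is below tol,
    # guarding against over-denoising from needless extra iterations.
    for _ in range(iters):
        new = denoise_step(points)
        if np.max(np.linalg.norm(new - points, axis=1)) < tol:
            break
        points = new
    return points
```

On a noisy circle, a few such steps visibly tighten the radial spread; the trade-off the learned methods address is doing this without the shrinkage and feature-blurring that plain neighbor averaging introduces.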
- Z. Liu, Z. Huang, M. Pan, and Y. He.
Deterministic Point Cloud Diffusion for Denoising, IEEE Transactions on Visualization and Computer Graphics, Vol. 32, No. 2, pp. 1822-1834, 2026. (PDF)
- P. Du, X. Wang, Z. Wu, X. Ru, X. Granier, and Y. He.
LaPDA: Latent-space Point Cloud Denoising with Adaptivity, IEEE Transactions on Visualization and Computer Graphics, Vol. 32, No. 2, pp. 1525-1539, 2026. (PDF)
- C. Guo, Z. Liu, and Y. He.
Guiding Point Cloud Denoising with Learned Structural Priors, AAAI, 2026.
- J. Wang, B. Fei, D.d.S. Edirimuni, Z. Liu, Y. He, and X. Lu.
A Survey of Deep Learning-based Point Cloud Denoising, arXiv:2508.17011, 2025.
- Q. Zhou, W. Yang, B. Fei, J. Xu, R. Zhang, K. Liu, Y. Luo, and Y. He.
3DMambaIPF: A State Space Model for Iterative Point Cloud Filtering via Differentiable Rendering, AAAI, 2025.
- C. Guo, W. Zhou, Z. Liu, and Y. He.
You Should Learn to Stop Denoising on Point Clouds in Advance, AAAI, 2025.
- J. Wang, X. Lu, M. Wang, F. Hou, and Y. He.
Learning Implicit Fields for Point Cloud Filtering, IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 9, pp. 5408-5420, 2025. (PDF)
- Z. Liu, S. Zhan, Y. Zhao, Y. Liu, R. Chen, and Y. He.
PCDNF: Revisiting Learning-based Point Cloud Denoising via Joint Normal Filtering, IEEE Transactions on Visualization and Computer Graphics, Vol. 30, No. 8, pp. 5419-5436, 2024. (arXiv)
- A. Mao, B. Yan, Z. Ma, and Y. He.
Denoising Point Clouds in Latent Space via Graph Convolution and Invertible Neural Network, CVPR, 2024.
- D.d.S. Edirimuni, X. Lu, Z. Shao, G. Li, A. Robles-Kelly, and Y. He.
IterativePFN: True Iterative Point Cloud Filtering, CVPR, 2023.
- D. Zhang, X. Lu, H. Qin, and Y. He.
Pointfilter: Point Cloud Filtering via Encoder-Decoder Modeling, IEEE Transactions on Visualization and Computer Graphics, Vol. 27, No. 3, pp. 2015-2027, 2021. (arXiv)
Parameterization and Sampling
- Y. Zhao, Q. Zhang, J. Hou, J. Xia, W. Wang, and Y. He.
FlexPara: Flexible Neural Surface Parameterization, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 48, No. 3, pp. 2772-2789, 2026. (arXiv)
- Q. Zhang, J. Hou, W. Wang, and Y. He.
Flatten Anything: Unsupervised Neural Surface Parameterization, NeurIPS, 2024.
- Q. Zhang, J. Hou, Y. Qian, Y. Zeng, J. Zhang, and Y. He.
Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 45, No. 8, pp. 9726-9742, 2023. (arXiv)
- A. Mao, Z. Du, J. Hou, Y. Duan, Y.-J. Liu, and Y. He.
PU-Flow: A Point Cloud Upsampling Network with Normalizing Flows, IEEE Transactions on Visualization and Computer Graphics, Vol. 29, No. 12, pp. 4964-4977, 2023. (arXiv)
- Y. Qian, J. Hou, Q. Zhang, Y. Zeng, S. Kwong, and Y. He.
Task-Oriented Compact Representation of 3D Point Clouds via a Matrix Optimization-Driven Network, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 33, No. 11, pp. 6981-6995, 2023. (arXiv)
- Q. Zhang, J. Hou, Y. Qian, A.B. Chan, J. Zhang, and Y. He.
ReGeoNet: Learning Regular Representations for Large-scale 3D Point Clouds, International Journal of Computer Vision, Vol. 130, No. 12, pp. 3100-3122, 2022. (PDF)
- Y. Qian, J. Hou, S. Kwong, and Y. He.
Deep Magnification-Flexible Upsampling over 3D Point Clouds, IEEE Transactions on Image Processing, Vol. 30, pp. 8354-8367, 2021. (arXiv)
- Y. Qian, J. Hou, S. Kwong, and Y. He.
PUGeo-Net: A Geometry-Centric Network for 3D Point Cloud Upsampling, ECCV, 2020.
Topology Optimization and 3D Generation
- L. Du, J. Hu, S. Wang, Y. Jiang, N. Lei, Y. He, and Z. Luo.
Topo-GenMeta: Generative Design of Metamaterials based on Diffusion Model with Attention to Topology, Computer-Aided Design, Vol. 190, 103977, 2026. (PDF)
- J. Wang, Z. Lyu, B. Fei, J. Yao, Y. Zhang, B. Dai, D. Lin, Y. He, and Y. Wang.
SLIDE: A Unified Mesh and Texture Generation Framework with Enhanced Geometric Control and Multi-View Consistency, International Journal of Computer Vision, Vol. 133, pp. 3105-3128, 2025. (PDF)
- J. Hu, B. Fei, B. Xu, F. Hou, S. Wang, N. Lei, W. Yang, C. Qian, and Y. He.
TopoGen: Topology-Aware 3D Generation with Persistence Points, Computer Graphics Forum, 44: e70257, 2025. (arXiv)
- B. Fei, J. Wang, L. Bai, K. Liu, X. Xu, W. Yang, Y. Zhang, Y. He, D. Lin, Z. Lyu, and B. Dai.
GetMesh: A Controllable Model for High-Quality Mesh Generation and Manipulation, IEEE Transactions on Pattern Analysis and Machine Intelligence, accepted, 2025. (PDF)
- J. Hu, Y. He, B. Xu, S. Wang, N. Lei, and Z. Luo.
IF-TONIR: Iteration-free Topology Optimization based on Implicit Neural Representations, Computer-Aided Design, Vol. 167, 103639, 2024. (PDF)
Motion, Gesture, and Correspondence
- Y. Adikusuma, Q. Huang, and Y. He.
LiteGE: Lightweight Geodesic Embedding for Efficient Geodesic Computation and Non-Isometric Shape Correspondence, AAAI, 2026. (arXiv)
- M. Zhang, D. Jin, C. Gu, F. Hong, Z. Cai, J. Huang, C. Zhang, X. Guo, L. Yang, Y. He, and Z. Liu.
Large Motion Model for Unified Multi-Modal Motion Generation, ECCV, 2024.
- S. Ye, Y.-H. Wen, Y. Sun, Y. He, Z. Zhang, Y. Wang, W. He, and Y.-J. Liu.
Audio-Driven Stylized Gesture Generation with Flow-Based Model, ECCV, 2022.
- Y. Zeng, Y. Qian, Q. Zhang, J. Hou, Y. Yuan, and Y. He.
IDEA-Net: Dynamic 3D Point Cloud Interpolation via Deep Embedding Alignment, CVPR, 2022.
- Y. Zeng, Y. Qian, Z. Zhu, J. Hou, H. Yuan, and Y. He.
CorrNet3D: Unsupervised End-to-End Learning of Dense Correspondence for 3D Point Clouds, CVPR, 2021.
Single- or Multi-view Reconstruction
- J. Hu, H. Wang, B. Xu, N. Ding, Z. Lu, N. Lei, and Y. He.
AquaSplatting: A Hybrid 3D Representation for Robust Underwater Scene Reconstruction via Dual-Branch Rendering, AAAI, 2026.
- D. Jin and Y. He.
MonoCloth: Reconstruction and Animation of Cloth-Decoupled Human Avatars from Monocular Videos, AAAI, 2026. (arXiv)
- K. Liu, W. Yang, B. Fei, and Y. He.
Gaussian2Scene: 3D Scene Representation Learning via Self-supervised Learning with 3D Gaussian Splatting, ICASSP, 2026. (arXiv)
- Q. Zhou, Y. Gong, W. Yang, J. Li, Y. Luo, B. Xu, S. Li, B. Fei, and Y. He.
MGSR: 2D/3D Mutual-Boosted Gaussian Splatting for High-Fidelity Surface Reconstruction under Various Light Conditions, ICCV, 2025.
- J. Song, Z. Ye, Q. Zhou, W. Yang, B. Fei, J. Xu, Y. He, and W. Ouyang.
Reflections Unlock: Geometry-Aware Reflection Disentanglement in 3D Gaussian Splatting for Photorealistic Scenes Rendering, arXiv:2507.06103, 2025.
- J. Deng, H. Niu, J. Li, F. Hou, and Y. He.
UNIS: A Unified Framework for Achieving Unbiased Neural Implicit Surfaces in Volume Rendering, ICCV, 2025.
- J. Kong, X. Song, S. Huai, B. Xu, J. Luo, and Y. He.
Do Not DeepFake Me: Privacy-Preserving Neural 3D Head Reconstruction Without Sensitive Images, AAAI, 2025.
- Y. Dai, Q. Wang, J. Zhu, D. Xi, Y. Huo, C. Qian, and Y. He.
Inverse Rendering with Multi-Bounce Path Tracing and Reservoir Sampling, ICLR, 2025.
- D. Jin, J. Hu, B. Xu, Y. Dai, C. Qian, and Y. He.
SFDM: Robust Decomposition of Geometry and Reflectance for Realistic Face Rendering from Sparse-View Images, CVPR, 2025.
- B. Fei, J. Xu, R. Zhang, Q. Zhou, W. Yang, and Y. He.
3D Gaussian Splatting as New Era: A Survey, IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 8, pp. 4429-4449, 2025. (arXiv)
- Y. Xu, G. Hou, J. Hu, T. Ren, X. Wang, Y. Zhang, C. Qian, F. Hou, and Y. He.
Physics and Geometry-Augmented Neural Implicit Surfaces for Rigid Bodies, Computer Aided Geometric Design, Vol. 119, 102437, 2025. (PDF)
- B. Xu, J. Hu, J. Li, and Y. He.
GSurf: 3D Reconstruction via Signed Distance Fields with Direct Gaussian Supervision, arXiv:2411.15723, 2024.
- H. Zhang, J. Deng, X. Chen, F. Hou, W. Wang, H. Qin, C. Qian, and Y. He.
From Transparent to Opaque: Rethinking Neural Implicit Surfaces with α-NeuS, NeurIPS, 2024.
- Y. Hu, S. Ye, W. Zhao, M. Lin, Y. He, Y.-H. Wen, Y. He, and Y.-J. Liu.
O^2-Recon: Completing 3D Reconstruction of Occluded Objects in the Scene with a Pre-Trained 2D Diffusion Model, AAAI, 2024.
- J. Deng, F. Hou, X. Chen, W. Wang, and Y. He.
A Novel Two-Stage UDF Learning Method for Robust Non-watertight Model Reconstruction from Multi-View Images, CVPR, 2024.
- B. Xu, J. Hu, F. Hou, K.-Y. Lin, W. Wu, C. Qian, and Y. He.
Parameterization-Driven Neural Surface Reconstruction for Object-Oriented Editing in Neural Rendering, ECCV, 2024.
- J. Li, Z. Wen, L. Zhang, J. Hu, F. Hou, Z. Zhang, and Y. He.
GS-Octree: Octree-based 3D Gaussian Splatting for Robust Object-Level 3D Reconstruction Under Strong Lighting, Computer Graphics Forum, 43: e15206, 2024. (arXiv)
- J. Li, L. Zhang, J. Hu, Z. Zhang, H. Sun, G. Song, and Y. He.
Real-Time Volume Rendering with Octree-Based Implicit Surface Representation, Computer-Aided Geometric Design, Vol. 111, 102322, 2024. (PDF)
- A. Mao, C. Dai, Q. Liu, J. Yang, L. Gao, Y. He, and Y.-J. Liu.
STD-Net: Structure-Preserving and Topology-Adaptive Deformation Network for Single-View 3D Reconstruction, IEEE Transactions on Visualization and Computer Graphics, Vol. 29, No. 3, pp. 1785-1798, 2023. (arXiv)
- B. Xu, J. Zhang, K.-Y. Lin, C. Qian, and Y. He.
Deformable Model Driven Neural Rendering for High-Fidelity 3D Reconstruction of Human Heads Under Low-View Settings, ICCV, 2023.
Completion, Segmentation, and Understanding
- H. Tian, Z. Jiang, S. Song, S. Zhong, Z. Liu, and Y. He.
Bridging Geometry and Semantics for 3D Point Cloud Instance Segmentation, Computational Visual Media, accepted, 2026.
- H. Xiao, W. Kang, Y. Guo, H. Liu, and Y. He.
Enhanced Geometry and Semantics for Camera-based 3D Semantic Scene Completion, IEEE Transactions on Image Processing, Vol. 35, pp. 1-13, 2026. (PDF)
- B. Fei, Y. Li, W. Yang, L. Ma, and Y. He.
Towards Unified Representation of Multi-Modal Pre-training for 3D Processing, IEEE Transactions on Visualization and Computer Graphics, Vol. 32, No. 2, pp. 2216-2229, 2026. (PDF)
- B. Fei, T. Luo, W. Yang, L. Liu, R. Zhang, and Y. He.
Curriculumformer: Taming Curriculum Pre-Training for Enhanced 3D Point Cloud Understanding, IEEE Transactions on Neural Networks and Learning Systems, Vol. 36, No. 4, pp. 7316-7330, 2025. (PDF)
- Y. Feng, H. Dai, G. Wei, L. Ma, P. Wang, Y. Zhou, and Y. He.
D-FRAME: Direction-Field-Based Wireframe Extraction for Complex CAD Models, IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 12, pp. 10595-10608, 2025. (PDF)
- B. Fei, J. Xu, Y. Li, W. Yang, Q. Zhou, L. Liu, T. Luo, and Y. He.
Self-supervised Learning for Pre-Training 3D Point Clouds: A Survey, Computational Visual Media, accepted, 2025. (arXiv)
- H. Xiao, W. Kang, H. Liu, Y. Li, and Y. He.
Semantic Scene Completion via Semantic-aware Guidance and Interactive Refinement Transformer, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 35, No. 5, pp. 4212-4225, 2025. (PDF)
- A. Mao, Y. Tang, J. Huang, and Y. He.
DMF-Net: Image-Guided Point Cloud Completion with Dual-Channel Modality Fusion and Shape-Aware Upsampling Transformer, AAAI, 2025.
- Q. Sun, C. Fang, S. Liu, Y. Sun, Y. Shang, and Y. He.
PolyGraph: A Graph-based Method for Floorplan Reconstruction from 3D Scans, IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 10, pp. 7350-7362, 2025. (PDF)
- H. Xiao, Y. He, H. Liu, W. Kang, and Y. Li.
Point Cloud Completion via Self-Projected View Augmentation and Implicit Field Constraint, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 34, No. 11, pp. 11564-11578, 2024. (PDF)
Copyright Notice: These materials are provided for academic dissemination only. Copyright remains with the authors or respective copyright holders. Reproduction or redistribution may require prior permission.