Color-NeuS: Reconstructing Neural Implicit Surfaces with Color

International Conference on 3D Vision (3DV), 2024

  The reconstruction of object surfaces from multi-view images or monocular video is a fundamental problem in computer vision. However, much of the recent research concentrates on reconstructing geometry through implicit or explicit methods. In this paper, we shift the focus to reconstructing the mesh together with color. We remove the view-dependent color from neural volume rendering while retaining volume rendering performance through a relighting network. The mesh is extracted from the signed distance function (SDF) network, and the color of each surface vertex is queried from the global color network. To evaluate our approach, we designed an in-hand object scanning task featuring numerous occlusions and dramatic shifts in lighting conditions. We collected several videos for this task, and our results surpass those of existing methods capable of reconstructing a mesh alongside color. Additionally, we assessed our method on public datasets, including DTU, BlendedMVS, and OmniObject3D; the results indicate that it performs well across all of them.

Project Page

Figure: Color-NeuS pipeline overview.
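
To make the pipeline concrete, here is a minimal, hypothetical PyTorch sketch (not the released implementation): an SDF network provides geometry, a global color network predicts the view-independent color, and a relighting head adds the view-dependent term used only during volume rendering; the mesh is then extracted with marching cubes and colored by querying the global color network at each vertex. All module names, layer sizes, and the grid resolution below are assumptions.

```python
# Minimal sketch of the Color-NeuS idea, assuming simple MLPs for all networks.
import torch
import torch.nn as nn
from skimage import measure


def mlp(in_dim, out_dim, hidden=256, depth=4):
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, hidden), nn.ReLU(inplace=True)]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)


class ColorNeuSSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.sdf_net = mlp(3, 1 + 256)        # SDF value + geometry feature
        self.global_color = mlp(3 + 256, 3)   # view-independent RGB
        self.relight = mlp(3 + 256 + 3, 3)    # view-dependent residual

    def sdf(self, x):
        out = self.sdf_net(x)
        return out[..., :1], out[..., 1:]     # (sdf, feature)

    def render_color(self, x, view_dir):
        """Color used during volume rendering: global term + relighting residual."""
        _, feat = self.sdf(x)
        base = torch.sigmoid(self.global_color(torch.cat([x, feat], -1)))
        residual = self.relight(torch.cat([x, feat, view_dir], -1))
        return base + residual                # supervised against the input images

    def vertex_color(self, x):
        """Color attached to mesh vertices: global, view-independent term only."""
        _, feat = self.sdf(x)
        return torch.sigmoid(self.global_color(torch.cat([x, feat], -1)))


@torch.no_grad()
def extract_colored_mesh(model, resolution=256, bound=1.0):
    # Evaluate the SDF on a dense grid and run marching cubes on the zero level set.
    t = torch.linspace(-bound, bound, resolution)
    grid = torch.stack(torch.meshgrid(t, t, t, indexing="ij"), -1).reshape(-1, 3)
    sdf_vals = torch.cat([model.sdf(p)[0] for p in grid.split(65536)], 0)
    sdf_vals = sdf_vals.reshape(resolution, resolution, resolution).numpy()
    verts, faces, _, _ = measure.marching_cubes(sdf_vals, level=0.0)
    verts = verts / (resolution - 1) * 2 * bound - bound  # grid indices -> world coords
    # Query the global color network at each surface vertex.
    colors = model.vertex_color(torch.from_numpy(verts.copy()).float()).numpy()
    return verts, faces, colors


# With trained weights, the colored mesh would be obtained as:
#   verts, faces, colors = extract_colored_mesh(trained_model)
# and could then be exported with a mesh library of choice.
```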


Recommended citation:

@inproceedings{zhong2024colorneus,
    title     = {Color-NeuS: Reconstructing Neural Implicit Surfaces with Color},
    author    = {Zhong, Licheng and Yang, Lixin and Li, Kailin and Zhen, Haoyu and Han, Mei and Lu, Cewu},
    booktitle = {International Conference on 3D Vision (3DV)},
    year      = {2024}
}