
DINeMo

Official implementation of DINeMo, from the following paper:

DINeMo: Learning Neural Mesh Models with no 3D Annotations. CVPR 2025 C3DV Workshop.
Weijie Guo, Guofeng Zhang, Wufei Ma, and Alan Yuille
Johns Hopkins University
[arXiv] [Project Page]

Data Structure

Under the data folder, the following structure should be maintained:

data/
├── train/              # Folder containing training data
│   ├── images/         # Folder containing input images for training
│   ├── seg_masks/      # Folder containing segmentation masks for training
│   ├── correspondence/ # Folder containing correspondence data for training
│   └── gt_correspondence/ # (Optional) Folder containing ground truth correspondence data for training
├── test/               # Folder containing testing data
│   ├── images/         # Folder containing input images for testing
│   ├── seg_masks/      # Folder containing segmentation masks for testing
│   ├── correspondence/ # Folder containing correspondence data for testing
│   └── gt_correspondence/ # (Optional) Folder containing ground truth correspondence data for testing
└── mesh/               # Folder containing mesh data
    ├── CAD/            # Folder containing CAD models
    └── color/          # Folder containing color information

Each low-level subfolder contains additional subdirectories, one for each category.
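The layout above can be created programmatically. The following is a minimal sketch, not part of the repository: folder names are taken from the tree above, `make_data_skeleton` is a hypothetical helper, and "car" is just an example category.

```python
from pathlib import Path

# Subfolders per split, as listed in the README's data tree.
SPLITS = {
    "train": ["images", "seg_masks", "correspondence", "gt_correspondence"],
    "test": ["images", "seg_masks", "correspondence", "gt_correspondence"],
    "mesh": ["CAD", "color"],
}

def make_data_skeleton(root: str, category: str = "car") -> None:
    """Create the expected data/ layout, with one subdirectory per category."""
    for split, subfolders in SPLITS.items():
        for sub in subfolders:
            (Path(root) / split / sub / category).mkdir(parents=True, exist_ok=True)

make_data_skeleton("data")
```

Note that `gt_correspondence/` is optional and only needed if ground-truth correspondences are available for evaluation.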

Training preparation

  1. Prepare masks. Generate image masks with Grounded-Segment-Anything and store them in .npy format.

  2. Prepare meshes. Place each mesh in the appropriate subfolder according to its category. To enable visualization during training and testing, generate vertex colors with the following command:

     python visualize/color_vertices.py --mesh_folder /path/to/mesh --cate car

  3. Prepare pseudo-correspondences. Generate pseudo-correspondences for training without 3D annotations. Outputs are saved as .npy files in which each pixel stores the index of its corresponding mesh vertex.

     python models/correspondence.py --data_folder /path/to/data --cate car
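Both masks and pseudo-correspondences are stored as .npy arrays. The sketch below illustrates one plausible encoding; the shapes, dtypes, and the -1 background sentinel are assumptions for illustration, not taken from the repository.

```python
import numpy as np

# Hypothetical binary segmentation mask for a 480x640 image, saved as .npy.
mask = np.zeros((480, 640), dtype=bool)
mask[100:300, 200:500] = True  # foreground region
np.save("example_mask.npy", mask)

# Pseudo-correspondence map: each foreground pixel stores the index of its
# corresponding mesh vertex; background pixels use -1 as a sentinel.
num_vertices = 1000
corr = np.full((480, 640), -1, dtype=np.int64)
corr[mask] = np.random.randint(0, num_vertices, size=int(mask.sum()))
np.save("example_correspondence.npy", corr)

# Reload and sanity-check the stored arrays.
loaded = np.load("example_correspondence.npy")
assert loaded.shape == mask.shape
assert loaded[~mask].max() == -1  # background pixels carry no vertex index
```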

Training

python train_nemo.py --train True --config /path/to/config

Inference

python train_nemo.py --train False --config /path/to/config

Citation

If you find this repository helpful, please consider citing:

@article{guo2025dinemo,
  title={DINeMo: Learning Neural Mesh Models with no 3D Annotations},
  author={Guo, Weijie and Zhang, Guofeng and Ma, Wufei and Yuille, Alan},
  journal={arXiv preprint arXiv:2503.20220},
  year={2025}
}