- Paper & Poster
- Supplemental Materials
- Code & Checkpoint
- Environment
- Usage
- Citation
- Acknowledgement
- Other / TODO
Paper:
Trap Attention: Monocular Depth Estimation With Manual Traps (CVPR 2023)
Poster & Supplement & Related Materials:
- Poster / Virtual Session: available via the CVPR 2023 poster page
- Supplemental PDF (network details / extra results): available with the paper on the CVF Open Access site
Code & Checkpoint:
- Implementation repository: ICSResearch/TrapAttention on GitHub
- Pretrained checkpoints: available from the Google Drive folder below.
Google Drive: https://drive.google.com/drive/folders/1kIXg9UP0cVWUq_7Pq20JT9_RyR-PjvkS?usp=sharing
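If you prefer fetching the checkpoint folder from a script, one option is the gdown package (not a repo dependency; shown here only as an optional convenience):

```python
# Optional helper: download the pretrained checkpoints from the Google Drive folder.
# Requires `pip install gdown`; gdown is not listed in the repo's requirements.
import gdown

url = "https://drive.google.com/drive/folders/1kIXg9UP0cVWUq_7Pq20JT9_RyR-PjvkS?usp=sharing"
gdown.download_folder(url=url, output="checkpoints", quiet=False)
```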
Environment:
- Python 3.8
- PyTorch 1.7.1 (or a later compatible version)
- Other dependencies: install via requirements.txt in the repo
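Before installing the remaining dependencies, a quick sanity check that the Python / PyTorch / CUDA setup matches the versions above (a minimal snippet, not part of the repo):

```python
# Quick sanity check of the Python / PyTorch / CUDA environment.
import sys
import torch

print("Python:", sys.version.split()[0])            # expect 3.8+
print("PyTorch:", torch.__version__)                 # expect 1.7.1 or a compatible later release
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```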
Usage:
Clone the repo, install the dependencies, download a checkpoint, and run, for example:

```bash
git clone https://github.com/ICSResearch/TrapAttention.git
cd TrapAttention
pip install -r requirements.txt
python your_run_script.py --config configs/your_config.yaml  # adjust as needed
```

Make sure CUDA / GPU memory is sufficient if you run high-resolution inputs or a large batch size.
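If you would rather call the network from Python than through the run script, the general pattern is sketched below. It assumes you have already built the model object from the repo's own code; the checkpoint-key handling ("state_dict") and the bare ToTensor pre-processing are guesses that may need adjusting to the actual config:

```python
# Sketch of single-image depth inference with a pretrained checkpoint.
# `model` must be an already-instantiated nn.Module from the TrapAttention code;
# the wrapping key "state_dict" below is an assumption, not the repo's documented format.
import torch
import torchvision.transforms as T
from PIL import Image

def load_checkpoint(model, ckpt_path, device="cuda"):
    """Load weights saved with torch.save into an instantiated model."""
    state = torch.load(ckpt_path, map_location=device)
    if isinstance(state, dict) and "state_dict" in state:
        state = state["state_dict"]
    model.load_state_dict(state, strict=False)
    return model.to(device).eval()

def predict_depth(model, image_path, device="cuda"):
    """Run one RGB image through the network and return the predicted depth map."""
    transform = T.Compose([T.ToTensor()])  # add resizing / normalization per your config
    img = transform(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
    with torch.no_grad():
        depth = model(img)
    return depth.squeeze().cpu()
```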
Citation:
If you use this code (or parts of it) in your work, please cite:

```bibtex
@InProceedings{Ning_2023_CVPR,
author = {Chao Ning and Hongping Gan},
title = {Trap Attention: Monocular Depth Estimation With Manual Traps},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2023},
pages = {5033-5043}
}
```

Acknowledgement:
Thanks to the following outstanding works, libraries, and communities:
- Transformer / Vision Transformer backbones used in the encoder
- The community maintaining open-source depth-estimation toolboxes
- All contributors and testers who reported bugs or improvements
Other / TODO:
- (Optional) Add inference examples & sample outputs in /examples/
- (Optional) Add visualization of depth maps (RGB → depth) in README / docs
- (Optional) Add evaluation scripts and result tables