ProCache accelerates Diffusion Transformers with constraint-aware feature caching and selective computation, skipping redundant work while preserving generation quality.
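To illustrate the general idea of feature caching (this is only a conceptual sketch, not the actual ProCache algorithm; the `refresh_every` schedule and class names below are hypothetical), a transformer block can recompute its features only on scheduled steps and reuse the cached output in between:

```python
# Conceptual sketch of timestep-level feature caching (NOT the real ProCache
# schedule): a block runs its expensive computation only on "full" steps and
# returns cached features on the intermediate steps.

class CachedBlock:
    def __init__(self, fn, refresh_every=3):
        self.fn = fn                        # expensive transformer-block computation
        self.refresh_every = refresh_every  # hypothetical fixed refresh schedule
        self.cache = None

    def __call__(self, x, step):
        if self.cache is None or step % self.refresh_every == 0:
            self.cache = self.fn(x)         # full computation on scheduled steps
        return self.cache                   # reuse cached features otherwise


def expensive(x):
    # stand-in for an attention/MLP block
    return [v * 2 for v in x]


block = CachedBlock(expensive, refresh_every=3)
outs = [block([step], step) for step in range(6)]
# steps 0 and 3 recompute; steps 1-2 and 4-5 reuse the cached features
```

ProCache's contribution lies in deciding *when* and *what* to recompute under quality constraints, rather than using a fixed schedule like the one above; see the paper for the actual method.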
- 2026/03/21 📌 The code has been released.
- 2025/11/08 💥 ProCache has been accepted by AAAI 2026!
This codebase is built upon the excellent ToCa and TaylorSeer. We are grateful to the authors for their open releases and for advancing token-wise feature caching and forecasting in diffusion transformers.
To run experiments with ProCache, follow the instructions in the model-specific markdown files in this repository (each backbone has its own guide):
- DiT-ProCache.md — DiT
- FLUX-ProCache.md — FLUX
- PixArt-ProCache.md — PixArt-α
- HunyuanVideo: We plan to add ProCache-style acceleration for HunyuanVideo (text-to-video) in a future release.
```bibtex
@inproceedings{cao2026procache,
  title={ProCache: Constraint-Aware Feature Caching with Selective Computation for Diffusion Transformer Acceleration},
  author={Cao, Fanpu and Chen, Yaofo and You, Zeng and Luo, Wei},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={40},
  number={24},
  pages={19862--19870},
  year={2026}
}
```

If you have any questions, please email fanpucao@gmail.com or chenyaofo@scut.edu.cn.
