
tt-torch is a PyTorch 2.0 and torch-mlir based front-end for tt-mlir.

The tt-torch repository is a PyTorch-based front-end compiler that lets developers write standard PyTorch models and then compile and run them on Tenstorrent AI accelerators. tt-torch acts as a bridge between PyTorch models, MLIR dialects (Tenstorrent-specific IRs such as ttir and ttgir), and low-level hardware execution on Tenstorrent chips.
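In practice, tt-torch plugs into the standard PyTorch 2.0 `torch.compile` flow as a custom Dynamo backend. The sketch below is illustrative only: the import path `tt_torch.dynamo.backend` and the toy model are assumptions, not the definitive API, so check the repository documentation for the exact entry point.

```python
# A minimal sketch of the PyTorch 2.0 compile flow with a Tenstorrent backend.
# The import path `tt_torch.dynamo.backend` is an assumption and may differ
# from the actual entry point exposed by tt-torch.
import torch
import torch.nn as nn
from tt_torch.dynamo.backend import backend  # assumed tt-torch Dynamo backend

class TinyMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(32, 8)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyMLP().eval()
# torch.compile traces the model through Dynamo and hands the FX graph to the
# backend, which lowers it through torch-mlir / tt-mlir for the accelerator.
compiled = torch.compile(model, backend=backend)

with torch.no_grad():
    out = compiled(torch.randn(4, 32))
print(out.shape)
```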
Note: tt-torch is transitioning to an experimental torch-xla backend, which will become the primary path forward for this project instead of the current torch-mlir approach. This project will be merged with tt-xla in the future. This strategic change is based on the following factors:

- torch-xla now supports custom PJRT devices, which we have already developed in tt-xla (see the sketch after this note)
- We did not find a good way to implement SPMD using torch-mlir
- torch-mlir lacks an eager mode and is fully ahead-of-time (AoT) compiled

Stay tuned for updates as we make this transition.
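For context on the torch-xla direction, the snippet below sketches how a PJRT-backed device is consumed from PyTorch through torch-xla's lazy-tensor API. The `PJRT_DEVICE` value `"TT"` is a placeholder assumption; the actual Tenstorrent PJRT plugin lives in tt-xla and may register under a different name.

```python
# A minimal sketch of torch-xla's PJRT flow; the "TT" device name is an
# assumed placeholder for the Tenstorrent plugin shipped with tt-xla.
import os
os.environ.setdefault("PJRT_DEVICE", "TT")  # e.g. "CPU" works for a local smoke test

import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()            # acquire the PJRT-backed XLA device
x = torch.randn(4, 4).to(device)    # tensors move like on any torch device
y = (x @ x.t()).sum()               # ops are traced lazily rather than run eagerly
xm.mark_step()                      # cut and execute the traced graph on the device
print(y.item())
```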
This repo is part of Tenstorrent's bounty program. If you are interested in helping to improve tt-forge, please make sure to read the Tenstorrent Bounty Program Terms and Conditions before heading to the issues tab. Look for issues tagged with both "bounty" and a difficulty level!