| Date | Paper Title | Presenter | Notes | Venue |
|---|---|---|---|---|
| 07.19 | Big Bird: Transformers for Longer Sequences | JinChao Yan | Slides | NeurIPS 2020 |
| 06.28 | Reformer: The Efficient Transformer | JinChao Yan | Slides | ICLR 2020 |
| 05.31 | A3: Accelerating Attention Mechanisms in Neural Networks with Approximation | JinChao Yan | Slides | HPCA 2020 |
| 06.07 | SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning | JinChao Yan | Slides | HPCA 2021 |
| 06.12 | Sanger: A Co-Design Framework for Enabling Sparse Attention using Reconfigurable Architecture | JinChao Yan | Slides | MICRO 2021 |
| 07.26 | DOTA: Detect and Omit Weak Attentions for Scalable Transformer Acceleration | JinChao Yan | Slides | ASPLOS 2022 |
| 08.16 | FACT: FFN-Attention Co-optimized Transformer Architecture with Eager Correlation Prediction | JinChao Yan | Slides | ISCA 2023 |