Update vocab_size for debugmodel_moe of qwen3 moe #1864

wmhst7 wants to merge 1 commit into pytorch:main from
Conversation
Setting vocab_size to 2048 without updating the tokenizer accordingly will cause a CUDA illegal memory access error. Maybe it's better to keep the model's vocab_size consistent with other model sizes.
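For context, here is a minimal sketch of how that mismatch manifests (illustrative values only, not torchtitan code): an embedding table sized for 2048 ids receives a token id from a tokenizer with a much larger vocabulary.

```python
import torch
import torch.nn as nn

# Minimal sketch of the failure mode (illustrative numbers, not torchtitan code).
device = "cuda" if torch.cuda.is_available() else "cpu"

model_vocab_size = 2048                              # debug model's embedding rows
embedding = nn.Embedding(model_vocab_size, 64).to(device)

# A full Qwen3 tokenizer can emit ids far above 2048; any id >=
# model_vocab_size is out of bounds for the embedding lookup.
token_ids = torch.tensor([151_000], device=device)  # hypothetical id

# On CPU this raises a clear IndexError; on CUDA the same out-of-bounds
# lookup typically surfaces as a device-side assert or an illegal memory
# access reported at a later synchronization point.
out = embedding(token_ids)
```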
Hi @wmhst7! Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with `CLA signed`.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
```python
    ),
    # Qwen3-MoE models
    "debugmodel_moe": Qwen3ModelArgs(
        vocab_size=2048,
```
I think we are using the test tokenizer in the toml, which has fewer than 2048 tokens, so it should be fine?
https://github.com/pytorch/torchtitan/blob/main/torchtitan/models/qwen3/train_configs/qwen3_moe_debug.toml#L18
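As an aside, a small guard along these lines (a hypothetical helper, not an existing torchtitan API) would make the tokenizer/model constraint explicit and fail fast instead of crashing inside a CUDA kernel:

```python
# Hypothetical guard, not part of torchtitan: check the tokenizer/model
# vocab constraint up front, before any embedding lookup runs on device.
def check_vocab_fits(tokenizer_vocab_size: int, model_vocab_size: int) -> None:
    if tokenizer_vocab_size > model_vocab_size:
        raise ValueError(
            f"tokenizer emits ids up to {tokenizer_vocab_size - 1}, but the "
            f"model's embedding table only has {model_vocab_size} rows; "
            "out-of-bounds lookups would fail with a CUDA illegal memory access"
        )

# With the test tokenizer referenced in qwen3_moe_debug.toml (fewer than
# 2048 tokens), vocab_size=2048 passes this check; a full Qwen3 tokenizer
# (~151k tokens) would not.
check_vocab_fits(tokenizer_vocab_size=2000, model_vocab_size=2048)  # example values
```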
```diff
     # Qwen3-MoE models
     "debugmodel_moe": Qwen3ModelArgs(
-        vocab_size=2048,
+        vocab_size=151936,
```
Thanks for pointing this out! I set this to a smaller size because we need debugmodel_moe to fit into GitHub CI. I checked the toml file, and the tokenizer has fewer than 2048 tokens. Can you also attach the command to reproduce the issue?
OK, if this is for the CI run, that's reasonable. I have no issues on my end. I'll go ahead and close this PR for now.
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!