Fix: Manage the import of torch.amp to be compatible with all PyTorch versions #13487
base: master
Conversation
All Contributors have signed the CLA. ✅
👋 Hello @paraglondhe098, thank you for submitting a 🚀 PR to Ultralytics YOLOv5!
To reproduce and understand the issue you're addressing more clearly, a Minimum Reproducible Example (MRE) demonstrating the AMP warning context would be useful for the reviewers. If you can provide an example of the exact conditions under which the error occurs (e.g., a specific PyTorch version, configuration details, or dataset), it will aid in validation and testing. For more information, refer to our Contributing Guide. If questions come up or further clarification is needed, feel free to add comments here. This looks like a solid and impactful improvement - thank you for contributing to the community! 🚀✨
I have read the CLA Document and I sign the CLA
👋 Hello there! We wanted to let you know that we've decided to close this pull request due to inactivity. We appreciate the effort you put into contributing to our project, but unfortunately, not all contributions are suitable or aligned with our product roadmap. We hope you understand our decision, and please don't let it discourage you from contributing to open source projects in the future. We value all of our community members and their contributions, and we encourage you to keep exploring new projects and ways to get involved. For additional resources and information, please see the links below:
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Changes made
While training YOLOv5, I encountered this deprecation warning:
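The exact text varies by PyTorch version, but on recent releases the AMP deprecation notice reads roughly like this (quoted approximately, not verbatim from the PR):

```
FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated.
Please use `torch.amp.autocast('cuda', args...)` instead.
```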
To address this, I updated the import in train.py with a try/except guard so it works across all PyTorch versions allowed by requirements.txt.
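The guard looks roughly like this (a minimal sketch; the exact lines in the PR diff may differ):

```python
# Prefer the unified torch.amp namespace; fall back to torch.cuda.amp
# on PyTorch releases that don't expose these names there yet.
try:
    from torch.amp import GradScaler, autocast
except ImportError:
    from torch.cuda.amp import GradScaler, autocast
```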
Additionally, the variable amp (a boolean indicating whether to use automatic mixed-precision training) was renamed to use_amp for clarity, since amp is also the module name.
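For context, here is a hypothetical sketch (not the exact train.py code) of how the renamed flag fits together with the compatibility import; the guard is repeated so the snippet runs standalone:

```python
import torch

try:
    # Unified namespace; torch.amp.GradScaler exists on newer PyTorch (>= 2.3)
    from torch.amp import GradScaler, autocast
    _UNIFIED_AMP = True
except ImportError:
    # Legacy namespace on older PyTorch releases
    from torch.cuda.amp import GradScaler, autocast
    _UNIFIED_AMP = False

use_amp = torch.cuda.is_available()  # boolean flag, formerly named `amp`
scaler = GradScaler(enabled=use_amp)

def amp_autocast(enabled: bool):
    # torch.amp.autocast requires an explicit device_type argument,
    # while the legacy torch.cuda.amp.autocast does not accept one.
    return autocast("cuda", enabled=enabled) if _UNIFIED_AMP else autocast(enabled=enabled)

# Typical training step:
#   with amp_autocast(use_amp):
#       loss = model(x)
#   scaler.scale(loss).backward()
```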
🛠️ PR Summary
Made with ❤️ by Ultralytics Actions
🌟 Summary
Improves AMP (Automatic Mixed Precision) integration, making it compatible across PyTorch versions.
📊 Key Changes
- Falls back to torch.cuda.amp if torch.amp is not available (ensures compatibility across PyTorch versions).
- Replaces the amp variable with use_amp for better clarity and consistency.
- Updates gradient scaling (GradScaler) and automatic casting (autocast) for seamless device type support (e.g., CPU, GPU).

🎯 Purpose & Impact
- Ensures YOLOv5 training runs across PyTorch versions, whether or not they provide torch.amp.