Description
If you do not know the root cause of the problem / bug and would like someone to help you, please post according to this template:
Instructions To Reproduce the Issue:
- run the demo (a minimal programmatic equivalent is sketched after the nvidia-smi output below):
python demo/demo.py --config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml --input input.jpg [--other-options] --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
- CUDA 10.0; nvidia-smi output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.43 Driver Version: 418.43 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 208... Off | 00000000:04:00.0 Off | N/A |
| 31% 34C P0 55W / 250W | 0MiB / 10989MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce RTX 208... Off | 00000000:06:00.0 Off | N/A |
| 31% 34C P0 46W / 250W | 0MiB / 10989MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 GeForce RTX 208... Off | 00000000:07:00.0 Off | N/A |
| 31% 34C P0 51W / 250W | 0MiB / 10989MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 GeForce RTX 208... Off | 00000000:08:00.0 Off | N/A |
| 31% 33C P0 60W / 250W | 0MiB / 10989MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 4 GeForce RTX 208... Off | 00000000:0C:00.0 Off | N/A |
| 31% 35C P0 60W / 250W | 0MiB / 10989MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 5 GeForce RTX 208... Off | 00000000:0D:00.0 Off | N/A |
| 30% 30C P0 50W / 250W | 0MiB / 10989MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
| 6 GeForce RTX 208... Off | 00000000:0E:00.0 Off | N/A |
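For reference, a minimal programmatic equivalent of the demo command above (a sketch, assuming detectron2 and OpenCV are installed and input.jpg is a readable image; config and weight names mirror the command line):

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Same Mask R-CNN R50-FPN 3x config and COCO weights used by demo.py
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")

predictor = DefaultPredictor(cfg)             # builds the model on the default GPU
outputs = predictor(cv2.imread("input.jpg"))  # inference path that ends up calling nms
print(outputs["instances"].pred_classes)
```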
What you observed (including the full logs):
return _C.nms(boxes, scores, iou_threshold)
RuntimeError: CUDA error: no kernel image is available for execution on the device (nms_cuda at /tmp/pip-req-build-9d9zypi6/torchvision/csrc/cuda/nms_cuda.cu:127)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x6d (0x7f3cd35c7e7d in /home/azuryl/anaconda3/envs/detectron2p37/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: nms_cuda(at::Tensor const&, at::Tensor const&, float) + 0x8d1 (0x7f3ca5dbaece in /home/azuryl/anaconda3/envs/detectron2p37/lib/python3.7/site-packages/torchvision/_C.so)
frame #2: nms(at::Tensor const&, at::Tensor const&, float) + 0x183 (0x7f3ca5d7eed7 in /home/azuryl/anaconda3/envs/detectron2p37/lib/python3.7/site-packages/torchvision/_C.so)
frame #3: + 0x79cf5 (0x7f3ca5d98cf5 in /home/azuryl/anaconda3/envs/detectron2p37/lib/python3.7/site-packages/torchvision/_C.so)
frame #4: + 0x765b0 (0x7f3ca5d955b0 in /home/azuryl/anaconda3/envs/detectron2p37/lib/python3.7/site-packages/torchvision/_C.so)
frame #5: + 0x70d1e (0x7f3ca5d8fd1e in /home/azuryl/anaconda3/envs/detectron2p37/lib/python3.7/site-packages/torchvision/_C.so)
frame #6: + 0x70fc2 (0x7f3ca5d8ffc2 in /home/azuryl/anaconda3/envs/detectron2p37/lib/python3.7/site-packages/torchvision/_C.so)
frame #7: + 0x5be4a (0x7f3ca5d7ae4a in /home/azuryl/anaconda3/envs/detectron2p37/lib/python3.7/site-packages/torchvision/_C.so)
frame #59: __libc_start_main + 0xf0 (0x7f3d0c2ca830 in /lib/x86_64-linux-gnu/libc.so.6)
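The failure is raised inside torchvision's compiled CUDA extension (nms_cuda.cu), so it can be isolated from detectron2 with a small standalone check (a sketch, run against the same torch/torchvision install; the box values are arbitrary):

```python
import torch
from torchvision.ops import nms

# Two overlapping boxes in (x1, y1, x2, y2) format, placed on the GPU.
boxes = torch.tensor([[0., 0., 10., 10.],
                      [1., 1., 11., 11.]], device="cuda")
scores = torch.tensor([0.9, 0.8], device="cuda")

# If the torchvision binary lacks kernels for this GPU architecture,
# this call fails with the same "no kernel image is available" error.
print(nms(boxes, scores, iou_threshold=0.5))
```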
Expected behavior:
If there is no obvious error in "what you observed" above, please tell us the expected behavior.
If you expect the model to converge / work better, note that we do not give suggestions on how to train a new model.
We will only help with it in one of two cases:
(1) You're unable to reproduce the results in the detectron2 model zoo.
(2) It indicates a detectron2 bug.
Environment:
Please paste the output of python -m detectron2.utils.collect_env.
If detectron2 hasn't been successfully installed, use python detectron2/utils/collect_env.py.
If your issue looks like an installation / environment issue, please first try to solve it yourself with the instructions in
https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md#common-installation-issues
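Since the log points at a GPU architecture / build mismatch (CUDA 10.0 wheels vs. RTX 2080-series cards, compute capability 7.5), a quick way to check what the installed PyTorch binary was built for (a sketch; torch.cuda.get_arch_list() only exists in newer PyTorch releases, and the same mismatch would typically also apply to the matching torchvision wheel):

```python
import torch

# Versions of the installed binary and the CUDA toolkit it was built against.
print(torch.__version__, torch.version.cuda)

# Compute capability of the GPU raising the error; (7, 5) for RTX 20xx cards.
print(torch.cuda.get_device_capability(0))

# Architectures the binary was compiled for; sm_75 must be listed (or covered
# by a PTX entry) for kernels to run on this GPU.
# Note: get_arch_list() may not be available in older PyTorch releases.
print(torch.cuda.get_arch_list())
```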