### Backend impacted

The PyTorch implementation

### Operating system

Linux

### Hardware

GPU with CUDA

### Description

**Summary**
Running the provided Docker setup fails on NVIDIA GeForce RTX 5090 (Blackwell, sm_120).
The container exits with a PyTorch warning about an unsupported CUDA capability and then crashes with:

`RuntimeError: CUDA error: no kernel image is available for execution on the device`
**Environment**

- Host OS: Windows (Docker Desktop + WSL2)
- Container base image: `nvcr.io/nvidia/cuda:12.4.1-runtime-ubuntu22.04`
- GPU: NVIDIA GeForce RTX 5090 (Blackwell, compute capability sm_120)
- Docker Compose: uses `driver: nvidia` and `capabilities: [gpu]`
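For reference, the relevant part of the Compose file looks roughly like this (the service name and `count` are placeholders; `driver: nvidia` and `capabilities: [gpu]` are the keys quoted above):

```yaml
services:
  app:                        # placeholder service name
    image: nvcr.io/nvidia/cuda:12.4.1-runtime-ubuntu22.04
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all      # assumption: all GPUs are reserved
              capabilities: [gpu]
```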
**Steps to reproduce**

- Clone the repo
- Run `docker compose up -d --build`
- Observe the container logs and the crash
**Expected behavior**

Container starts successfully and uses the GPU for inference.
**Actual behavior**

Container exits during model initialization. Log shows:

```
NVIDIA GeForce RTX 5090 with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90.
...
RuntimeError: CUDA error: no kernel image is available for execution on the device
```
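The warning itself explains the crash: the wheel inside the container ships kernels only for the listed architectures, and the 5090's sm_120 is not among them. A minimal sketch of that check (the arch list below is copied from the warning above; on an installed build, `torch.cuda.get_arch_list()` returns the equivalent list at runtime):

```python
# Arch list copied from the PyTorch warning (what the shipped wheel was compiled for).
supported_archs = "sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90".split()

# RTX 5090 (Blackwell) reports compute capability 12.0, i.e. sm_120.
device_arch = "sm_120"

# No sm_120 kernel in the wheel, hence
# "no kernel image is available for execution on the device".
print(device_arch in supported_archs)  # prints False
```

In other words, the failure is a build/packaging mismatch, not a driver or Compose misconfiguration.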
### Extra information
N/A
### Environment
Docker compose