| torch.cuda | Replacement | Since |
|---|---|---|
| torch.cuda.is_available() | torch.accelerator.is_available() | 2.6 |
| torch.cuda.device_count() | torch.accelerator.device_count() | 2.6 |
| torch.cuda.current_device() | torch.accelerator.current_device_index() | 2.6 |
| torch.cuda.set_device(dev) | torch.accelerator.set_device_index(idx) | 2.6 |
| torch.cuda.get_device_properties(i).total_memory | torch.accelerator.get_memory_info(i)[1] | 2.9 |
| backend="nccl" | dist.get_default_backend_for_device(device) | 2.6 |
| device.type == "cuda" / "cuda" literals | is_accelerator_type(device.type) | -- |
| torch.device(f"cuda:{idx}") | torch.device(accel_type, idx) | -- |
All 13 call sites have direct replacements; no gaps remain.
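The device-detection and device-construction rows above can be sketched as one helper. This is a minimal sketch, assuming torch >= 2.6 for the torch.accelerator namespace (guarded with hasattr so older builds fall back to CPU); pick_device is a hypothetical name, not part of the PyTorch API.

```python
import torch


def pick_device() -> torch.device:
    """Device-agnostic replacement for torch.device(f"cuda:{idx}")."""
    # torch.accelerator landed in 2.6; guard so older torch falls back to CPU.
    if hasattr(torch, "accelerator") and torch.accelerator.is_available():
        acc = torch.accelerator.current_accelerator()   # e.g. device("cuda") or device("xpu")
        idx = torch.accelerator.current_device_index()  # was torch.cuda.current_device()
        return torch.device(acc.type, idx)              # was torch.device(f"cuda:{idx}")
    return torch.device("cpu")


device = pick_device()
x = torch.ones(2, 2, device=device)
print(device.type, x.sum().item())
```

On a CUDA box this yields the same device as the old literal-based code, while the identical source runs unchanged on XPU or CPU-only builds.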
| torch.cuda | Replacement | Since | Notes |
|---|---|---|---|
| torch.cuda.is_available() | torch.accelerator.is_available() | 2.6 | |
| torch.cuda.device_count() | torch.accelerator.device_count() | 2.6 | |
| torch.cuda.current_device() | torch.accelerator.current_device_index() | 2.6 | |
| torch.cuda.get_device_properties(i).total_memory | torch.accelerator.get_memory_info(i)[1] | 2.9 | |
| torch.cuda.max_memory_allocated(dev) | torch.accelerator.max_memory_allocated(dev) | 2.9 | |
| torch.cuda.reset_peak_memory_stats(dev) | torch.accelerator.reset_peak_memory_stats(dev) | 2.9 | |
| torch.cuda.Event() | torch.Event(accel_type) | 2.9 | |
| torch.cuda.current_stream() | torch.accelerator.current_stream() | 2.9 | |
| torch.cuda.stream(s) | with stream: context manager | 2.10 | |
| torch.cuda.Stream() | torch.Stream(accel_type) | 2.11 | Needs newer torch |
| torch.cuda.OutOfMemoryError | torch.OutOfMemoryError | 2.5 | Device-agnostic base class; XPU raises this directly (verified on 2.10.0+xpu) |
| device.type == "cuda" / "cuda" literals | is_accelerator_type(device.type) | -- | |
| torch.device(f"cuda:{idx}") | torch.device(accel_type, idx) | -- | |
20 of 22 sites have direct replacements; the remaining 2 need a newer torch (torch.Stream requires 2.11, the stream context manager requires 2.10).
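The memory rows in the second table can be exercised together. This is a hedged sketch, assuming the torch.accelerator memory APIs take a device argument analogous to their torch.cuda counterparts (per the 2.9 rows above); it is guarded with hasattr so it degrades to None on CPU-only or pre-2.9 builds, and accelerator_memory_report is a hypothetical name for illustration.

```python
import torch


def accelerator_memory_report():
    """Peak-vs-total memory probe using the device-agnostic APIs."""
    acc = getattr(torch, "accelerator", None)
    if acc is None or not acc.is_available():
        return None                              # CPU-only build or pre-2.6 torch
    if not hasattr(acc, "max_memory_allocated"):
        return None                              # memory APIs landed in 2.9
    dev = acc.current_accelerator()
    acc.reset_peak_memory_stats(dev)             # was torch.cuda.reset_peak_memory_stats(dev)
    _ = torch.ones(1024, 1024, device=dev)       # ~4 MiB fp32 allocation to move the peak
    peak = acc.max_memory_allocated(dev)         # was torch.cuda.max_memory_allocated(dev)
    free, total = acc.get_memory_info(dev)       # [1] replaces get_device_properties(i).total_memory
    return peak, total


print(accelerator_memory_report())
```

Because every call goes through the acc handle, the same probe reports CUDA, XPU, or any other registered accelerator without a "cuda" literal in sight.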