Users mainly use PyTorch and JAX and these days rarely write CUDA code directly.
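To illustrate (a minimal sketch, not from the thread): the standard PyTorch device-agnostic pattern lets users run on a GPU without ever writing a CUDA kernel themselves; the backend dispatches to vendor kernels under the hood.

```python
import torch

# Pick "cuda" if a GPU and working driver stack are present, else fall back to CPU.
# The user never touches CUDA directly; torch dispatches to the right kernels.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(3, 3, device=device)
y = x @ x  # matmul runs on whichever backend `device` resolved to

print(y.shape)
```

The same script works unchanged on a CUDA machine or a CPU-only box, which is exactly why the driver/library install (the next point) is the main remaining pain for users.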
However, separately, installing drivers and the correct CUDA/cuDNN libraries is the user's responsibility, and this is sometimes slightly finicky.
With ROCm, the problems are: 1) PyTorch/JAX don't support it very well, for whatever reason, which may be partly because ROCm's quality frustrates the PyTorch/JAX devs; 2) installing drivers and libraries is a nightmare, poorly documented and constantly broken; 3) hardware support is very spotty and confusing.
PyTorch and JAX, good to know.
Why do they have ROCm/CUDA backends in the first place though? Why not just Vulkan?