Hacker News

DiabloD3 · last Saturday at 11:45 PM

CUDA isn't really used for new code. It's used for legacy codebases.

In the LLM world, you really only see CUDA being used with Triton and/or PyTorch consumers that haven't moved on to better pastures (mainly because they only know Python and aren't actually programmers).

That said, AMD can run most CUDA code through ROCm, and AMD officially supports Triton and PyTorch, so even the academics have a way out of Nvidia hell.


Replies

smokel · last Sunday at 8:45 PM

> CUDA isn't really used for new code.

I don't think this is particularly correct, or at least it's worded a bit too strongly.

For Nvidia hardware, CUDA simply gives the best performance, and there are many optimized libraries that you'd have to replace as well.

Granted, new ML frameworks tend to be more backend-agnostic, but saying that CUDA is no longer being used seems a bit odd.
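The backend-agnosticism point can be sketched in a few lines of PyTorch. A caveat on the sketch below: it assumes a working `torch` install, and it relies on the fact that on ROCm builds of PyTorch the `torch.cuda` namespace is backed by HIP, so the same code runs unchanged on AMD GPUs.

```python
import torch  # assumes a PyTorch build (CUDA, ROCm, or CPU-only) is installed

# On an Nvidia build, torch.cuda targets CUDA; on a ROCm build, the
# same torch.cuda API is backed by HIP, so identical code drives AMD GPUs.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(4, 4, device=device)
y = x @ x.T  # executes on whichever backend was selected

print(y.shape)  # a 4x4 result regardless of backend
```

This is why "uses CUDA" is slippery at this level: the framework code never mentions a vendor, but on Nvidia hardware the kernels it dispatches are still CUDA kernels.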

sexeriy237 · last Sunday at 7:50 AM

If you're not doing machine code by hand, you're not a programmer.

komali2 · last Sunday at 8:07 AM

What are non-legacy codebases using, then?

dgan · last Sunday at 7:57 AM

sooo what's the successor of CUDA?

TiredOfLife · last Sunday at 7:04 AM

ROCm doesn't work on this device
