
whoevercares | today at 3:47 AM

Absolutely. LLM inference optimization is still greenfield territory: techniques like overlap scheduling and JIT-compiled CUDA kernels are very recent. We’re only just starting to optimize for modern LLM architectures, so cost/perf will keep improving fast.
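
For anyone unfamiliar, "overlap scheduling" roughly means keeping the GPU busy on batch N while the CPU does the scheduling work for batch N+1, instead of alternating between the two. A minimal PyTorch-flavored sketch of the idea (names like prepare_batch, model, and request_queue are hypothetical stand-ins, not any particular engine's API):

```python
import torch

def prepare_batch(requests):
    # CPU-side scheduling work: pick requests, tokenize, build metadata, etc.
    # (hypothetical placeholder)
    return torch.randint(0, 32000, (len(requests), 128)).pin_memory()

def run_overlapped(model, request_queue, device="cuda"):
    stream = torch.cuda.Stream()
    next_inputs = prepare_batch(request_queue.pop())  # seed the pipeline
    while request_queue:
        inputs = next_inputs.to(device, non_blocking=True)
        with torch.cuda.stream(stream):
            out = model(inputs)                        # GPU busy on batch N
        next_inputs = prepare_batch(request_queue.pop())  # CPU builds batch N+1 meanwhile
        stream.synchronize()                           # join before consuming outputs
        yield out
```

The point is just that the CPU work in prepare_batch runs concurrently with the asynchronous GPU launch, so neither side sits idle; real engines do this with far more care around streams, pinned memory, and KV-cache bookkeeping.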