Hacker News

AlphaSite · today at 2:40 AM · 1 reply

I think for the same model, wall time is probably a more intuitive metric; at the end of the day, what you’re doing is renting GPU time slices.

Large outputs dominate compute time, so they are more expensive.

IMO input and output token counts are actually still a bad metric, since they linearise non-linear cost increases. I suspect we’ll see another change in the future where providers bucket pricing by context length: XL contexts might be 20x more expensive instead of 10x.
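To make the bucketing idea concrete, here's a minimal sketch of what length-bucketed pricing could look like. All rates, thresholds, and multipliers below are hypothetical, not any provider's actual prices; the point is just that a bucket introduces a step discontinuity on top of the per-token rate.

```python
# Illustrative sketch of context-length-bucketed pricing.
# All numbers here are made up for illustration, not real provider rates.

def price_usd(input_tokens: int, output_tokens: int,
              in_rate: float = 3e-6,       # hypothetical $ per input token
              out_rate: float = 15e-6,     # hypothetical $ per output token
              bucket_threshold: int = 200_000,
              bucket_multiplier: float = 2.0) -> float:
    """Flat per-token rates, with a multiplier applied once the input
    context crosses the bucket threshold (mimicking length-bucketed
    pricing, where cost per token jumps for very long contexts)."""
    mult = bucket_multiplier if input_tokens > bucket_threshold else 1.0
    return mult * (input_tokens * in_rate + output_tokens * out_rate)

# A 250k-token prompt costs more than 2x a 125k-token prompt: it's twice
# as long AND it lands in the higher-priced bucket.
small = price_usd(125_000, 1_000)
large = price_usd(250_000, 1_000)
```

Under this toy model, doubling the context across a bucket boundary more than doubles the bill, which is the non-linearity that flat per-token pricing hides.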


Replies

nsomaru · today at 3:39 AM

They already bucket when context goes above 200k
