Hacker News

unsupp0rted · yesterday at 10:00 PM · 2 replies

Even if LLMs didn't advance at all from this point onward, there's still loads of productive work that could be optimized / fully automated by them, at no worse output quality than the low-skilled humans we're currently throwing at that work.


Replies

pvab3 · yesterday at 10:53 PM

Inference requires a fraction of the power that training does. According to the Villalobos paper, the median estimate for when we exhaust the stock of public human-generated training data is 2028. At some point we won't be training bigger and bigger models every month: we will run out of additional material to train on, things will keep commodifying, and the amount of training happening will drop significantly unless new avenues open up for new types of models. But our current LLMs are much more compute-intensive than any other type of generative or task-specific model.
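
For a rough sense of the per-query gap, here is a back-of-envelope sketch using the standard ~6*N*D FLOPs-to-train and ~2*N FLOPs-per-generated-token approximations for dense transformers; the parameter count, token counts, and per-query length below are assumed round numbers for illustration, not figures from this thread or from the paper.

    # Back-of-envelope: training vs. per-query inference compute for a dense
    # transformer LLM. All numbers below are assumed, illustrative values.
    PARAMS = 70e9            # assumed model size: 70B parameters
    TRAIN_TOKENS = 15e12     # assumed training corpus: 15T tokens
    TOKENS_PER_QUERY = 1000  # assumed tokens generated per request

    # Standard approximations: ~6*N*D FLOPs to train, ~2*N FLOPs per generated token.
    train_flops = 6 * PARAMS * TRAIN_TOKENS
    query_flops = 2 * PARAMS * TOKENS_PER_QUERY

    print(f"Training run: {train_flops:.1e} FLOPs")
    print(f"One query:    {query_flops:.1e} FLOPs")
    print(f"Queries per training run's worth of compute: {train_flops / query_flops:.1e}")

Under these assumptions a single training run costs roughly as much compute as ~45 billion typical queries, which is the sense in which any one inference call is cheap, even though aggregate inference across enough traffic can eventually rival training.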

SchemaLoad · yesterday at 10:09 PM

How much of the current usage is productive work that's worth paying for vs. personal usage / spam that would just drop off once usage charges come in? I imagine the flood of slop videos on YouTube and Instagram would shrink if users had to pay fair prices to use the models.

The companies might also downgrade the quality of the models to make it more viable to provide them as an ad-supported service, which would again reduce utilisation.
