Hacker News

getnormality · yesterday at 10:00 PM · 4 replies

You're presupposing an answer to what is actually the most interesting question in AI right now: does scaling continue at a sufficiently favorable rate, and if so, how?

The AI companies and their frontier models have already ingested the whole internet and reoriented economic growth around data center construction. Meanwhile, Google throttles my own Gemini Pro usage with increasingly tight constraints. The big firms are feeling the pain on the compute side.

Substantial improvements must now come from algorithmic efficiency, which is bottlenecked mostly by human ingenuity. AI-assisted coding will help somewhat, but only with the drudgery, not the hardest parts.

If we ask a frontier AI researcher how they do algorithmic innovation, I am quite sure the answer will not be "the AI does it for me."


Replies

asdff · yesterday at 10:04 PM

Of course it continues. Look at the investment in hardware going on. Even with no algorithmic efficiency improvements, that hardware will just brute-force results with raw power, like a massive inefficient V8 engine that makes up for paltry horsepower-per-liter figures with sheer displacement.

p1esk · today at 3:33 AM

> The AI companies and their frontier models have already ingested the whole internet

Have the frontier models been trained on the whole of YouTube?

umairnadeem123 · today at 2:48 AM

This is the key tension, IMO. Do you think labs are underinvesting in eval infra because scaling headlines are easier to sell?

Also curious what would change your mind first: a clear algorithmic breakthrough, or just sustained cost/latency drops from systems work?

jwpapi · today at 1:17 AM

Honestly, I'm not even sure how much of the last 12 months was model improvement versus harness improvement. It feels like I could have done the same stuff with 4, if I'd been able to split every task into multiple subtasks with perfect prompts. So it could well be that an inner harness accounts for the recent improvements, but then I ask myself: is this maybe the same with our own intelligence?
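
The "harness" workflow described above can be sketched minimally: decompose a task into subtasks, prompt a model on each, then stitch the results together. Everything here is hypothetical illustration, not any lab's actual scaffolding; `call_model` is a stub standing in for a real LLM API, and in a real harness the model itself would typically propose the subtask split.

```python
def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call; echoes a canned answer so the sketch runs."""
    return f"[answer to: {prompt}]"

def plan_subtasks(task: str) -> list[str]:
    """Hypothetical fixed decomposition; a real harness would generate this."""
    return [f"{task} - step {i}" for i in range(1, 4)]

def run_harness(task: str) -> str:
    # Each subtask gets its own focused prompt, mimicking the
    # "perfect prompt per subtask" workflow the comment describes.
    partial_answers = [call_model(sub) for sub in plan_subtasks(task)]
    return "\n".join(partial_answers)

print(run_harness("summarize the thread"))
```

The point of the sketch is that none of the per-call model capability changes; any improvement comes purely from the orchestration around it.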