Hacker News

skippyboxedhero · today at 6:48 PM · 4 replies

GPT-2, o1, Opus... we've been here so many times. The reason they do this is that they know it works (and they seem to specifically employ credulous people who are prone to believe AGI is right around the corner). There haven't been significant innovations; the generated code is still not good, but the hype cycle has to retrigger.

I remember when OpenAI created the first thinking model with o1 and there were all these breathless posts on here hyperventilating about how the model had to be kept secret, how dangerous it was, etc.

Fell for it again award. All thinking does is burn output tokens for accuracy; it's the AI getting high on its own supply. This isn't innovation, but it was supposed to be super AGI. Not serious.


Replies

chaos_emergent · today at 7:46 PM

> All thinking does is burn output tokens for accuracy

“All that phenomenon X does is make a tradeoff of Y for Z”

It sounds like you’re indignant about it being called thinking, that’s fine, but surely you can realize that the mechanism you’re criticizing actually works really well?
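The tradeoff being described is the familiar one behind test-time compute: spending more output tokens (longer reasoning traces, or more sampled attempts with a majority vote) measurably raises accuracy. A minimal sketch of why that works, using a simulated noisy solver rather than any real model API (the solver, its per-sample accuracy `p`, and the sample count `k` are all illustrative assumptions):

```python
import random
from collections import Counter

def noisy_solver(answer, p, rng):
    # Hypothetical stand-in for one model attempt: returns the correct
    # answer with probability p, otherwise some wrong answer.
    if rng.random() < p:
        return answer
    return answer + rng.randint(1, 9)

def majority_vote_accuracy(answer, p, k, trials=5000, seed=0):
    # Estimate accuracy when we "burn" k samples' worth of output
    # tokens and take the most common answer (self-consistency voting).
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        votes = Counter(noisy_solver(answer, p, rng) for _ in range(k))
        if votes.most_common(1)[0][0] == answer:
            correct += 1
    return correct / trials

# One sample vs. fifteen samples: more tokens, higher accuracy.
print(majority_vote_accuracy(42, 0.6, 1))
print(majority_vote_accuracy(42, 0.6, 15))
```

The point of the sketch is just that the mechanism is a real tradeoff, not magic: a 60%-accurate single attempt becomes a far more reliable answer once wrong attempts have to outvote correct ones.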

b65e8bee43c2ed0 · today at 7:13 PM

>I remember when OpenAI created the first thinking model with o1 and there were all these breathless posts on here hyperventilating about how the model had to be kept secret, how dangerous it was, etc.

I've read that about Llama and Stable Diffusion. AI doomers are, and always have been, retarded.

simianwords · today at 7:13 PM

Incredible that people still think like this.

vonneumannstan · today at 6:54 PM

Lol, you haven't used a model since GPT-2, is what it sounds like.
