
rhubarbtree 01/22/2025

> My sense anecdotally from within the space is yes people are feeling like we most likely have a "straight shot" to AGI now

My problem with this is that the people making these statements are unlikely to be objective. The major players are in fundraising mode, and the safety folks have their own incentives colouring their evaluations.

Yesterday I repeatedly used OpenAI's API to summarise a document. The first result looked impressive. However, comparing repeated results revealed that each run was missing major points, in a way a human certainly would not. On the surface the summary looked good, but careful evaluation indicated a lack of understanding or reasoning.
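For concreteness, here is a minimal sketch of the kind of repeated-summary experiment described above, assuming the official openai Python client; the model name, prompt, and input file are illustrative rather than what the commenter actually used, and the consistency check is deliberately crude:

```python
# Sketch: summarise the same document several times and flag
# content words that some runs mention but others drop.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise(document: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Summarise the document in five bullet points."},
            {"role": "user", "content": document},
        ],
    )
    return resp.choices[0].message.content

document = open("report.txt").read()  # hypothetical input file
summaries = [summarise(document) for _ in range(5)]

# Crude consistency check: compare each run's vocabulary against
# the union of all runs. Large gaps suggest points are being dropped.
word_sets = [set(s.lower().split()) for s in summaries]
all_words = set.union(*word_sets)
for i, words in enumerate(word_sets):
    missing = all_words - words
    print(f"run {i}: omits {len(missing)} words that other runs mention")
```

A real evaluation would compare extracted claims rather than raw words, but even this rough check exposes how much the output varies between runs.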

Don’t get me wrong, I think AI is already transformative, but I am not sure we are close to AGI. I hear a lot of AGI talk, but it doesn’t reflect my experience at a company using and building AI.


Replies

dauhak 01/22/2025

Yeah, obviously motivations are murky and all over the place; no one's free of bias. I'm not taking a strong stance on whether they're right, or on how much of it is motivated reasoning. I just think at least quite a bit of it is genuine (I'm mainly basing this on researchers I know who have a track record of being very sober and "boring", rather than the flashy Altman types).

To your point, yeah, the models still fail in some surprising ways, but again, they're the worst they're ever going to be. On the reasoning issue in particular, a lot of people are excited that RL over chain-of-thought (CoT) is looking really promising.

I agree with your broader point, though: I'm not sure how close we are, and there's an awful lot of noise right now.

sroussey 01/25/2025

Summarizing is quite difficult. You need to keep the salient points and facts.

If anyone has experience getting this right, I would like to know how you do it.
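One pattern that comes up for this (a sketch of a common approach, not a proven recipe): extract the salient facts first, summarise from that list, then audit the summary against the facts so nothing major is silently dropped. Again assuming the openai Python client, with the model name and prompts purely illustrative:

```python
# Sketch: extract-then-verify summarisation pipeline.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def summarise_with_coverage_check(document: str) -> str:
    # Step 1: pull out the salient points as an explicit list.
    facts = ask(
        "List every key fact and claim in this document, "
        f"one per line:\n\n{document}"
    )
    # Step 2: summarise from the fact list, not the raw document.
    summary = ask(
        f"Write a faithful summary covering all of these points:\n\n{facts}"
    )
    # Step 3: audit the summary against the facts.
    audit = ask(
        "Which of these facts are missing from the summary? "
        f"Answer 'NONE' if all are covered.\n\nFACTS:\n{facts}"
        f"\n\nSUMMARY:\n{summary}"
    )
    if audit.strip() != "NONE":
        # One repair pass; a real pipeline would loop or flag for review.
        summary = ask(
            f"Revise the summary to also cover:\n{audit}\n\nSUMMARY:\n{summary}"
        )
    return summary
```

The design idea is to make omissions checkable: once the salient points exist as an explicit list, "did the summary keep them?" becomes a question the pipeline can ask, rather than something a reader has to notice across repeated runs.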