Hacker News

fragmede · today at 3:46 AM · 3 replies

For it to follow the instructions I had for it. Call me naive and stupid for thinking the 1M context window on the brand new model would actually, y'know, work.


Replies

quesera · today at 5:03 AM

That's a bit anthropomorphic though.

When LLMs become able to reflectively examine their own premises and weight paths, they will exceed the self-awareness of ordinary humans.

grey-area · today at 6:54 AM

It doesn’t reason or explicitly follow instructions, it generates plausible text given a context.

Natfan · today at 3:59 AM

Why would a further chance of context pollution be a good thing? I feel like it's easier for data to get lost in a larger context.
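The "data gets lost in a larger context" claim is the kind of thing a simple needle-in-a-haystack probe can check: bury one fact at varying depths inside filler text and see whether the model can retrieve it. A minimal sketch follows; `query_model`, `build_prompt`, `run_probe`, and the filler/needle strings are all hypothetical names, and no real LLM API is called here — you would plug in your provider's client where `query_model` is passed in.

```python
# Hypothetical needle-in-a-haystack probe (no real model is called here;
# `query_model` is a stand-in callable: prompt string -> answer string).

def build_prompt(needle: str, depth: float, total_sentences: int = 200) -> str:
    """Bury `needle` at a relative depth (0.0 = start, 1.0 = end) in filler."""
    filler = ["The sky was a uniform grey that morning."] * total_sentences
    position = int(len(filler) * depth)
    doc = filler[:position] + [needle] + filler[position:]
    return " ".join(doc) + "\n\nQuestion: what is the secret number?"

def run_probe(query_model, needle: str = "The secret number is 4817.") -> dict:
    """Map each depth to whether the model's answer recovered the fact."""
    results = {}
    for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
        answer = query_model(build_prompt(needle, depth))
        results[depth] = "4817" in answer
    return results
```

Running this across depths (and across context sizes, by scaling `total_sentences`) is roughly how long-context retrieval degradation gets measured in practice; a model whose 1M-token window "works" should recover the needle at every depth.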