Hacker News

metalliqaz (yesterday at 11:10 PM)

No it isn't. I really do not care what the LLM has to say. If a person has taken the (substantial) time necessary to fill the context with enough information that something interesting comes out, I would much rather they simply give me the inputs. The middleman is just digested Internet text. I've already got one of those on my end.


Replies

zahlman (yesterday at 11:52 PM)

Related: https://blog.gpkb.org/posts/just-send-me-the-prompt/

(I could have sworn there was a popular HN submission a while back of this or a similar blog post, but damned if I can find it now.)

andrewaylett (yesterday at 11:44 PM)

That does somewhat depend on the size of the context.

LLMs won't add information to the context, so if the output is larger than the input, it's slop. They're much better at picking information out of the context. If I have a corpus of information and prompt for an extraction, the result may well contain more information than the prompt alone. It's not always feasible to transfer the entire context, and I've also curated that specific result as suitably conveying the message I intend.

This does all take effort.

My take is also that I am interested in what people say: I have priors for how worthwhile I expect it to be to read stuff written by various people, and I will update my priors when they give me things to read. If they give me slop, that's going to affect what I think of them, and I expect the same in return. I'm willing to work quite hard to avoid asking my colleagues to read or review slop.
