Hacker News

pizzathyme · yesterday at 11:04 PM · 6 replies

The key thing here is not whether it's AI. The key thing is quality and signal. No one wants to read a low quality human comment either.

If the AI output were actually better than talking to a real human, more useful, more concise, better serving the job to be done, then no one would have a problem with it. In fact, they would appreciate it. That future is not here yet in many areas.

The problem is that people are wielding AI right now and either [a] the models they are using are not good enough, [b] the models aren't being given enough context, or [c] they are deployed in a way that produces sloppy output.

(Insert joke about whether this comment is AI. It's not, but joke away)


Replies

cbsmith · today at 6:23 AM

Yup. The comment about the LLM-generated PRs is telling. The complaint is that the LLM-generated PRs don't describe design intent. You know how to avoid that? Tell the LLM to provide intent, and if need be, give it the intent. A PR that doesn't capture intent should be categorically rejected, and the parties responsible should expect to never get a PR through without it.

WD-42 · yesterday at 11:15 PM

No. It doesn’t matter how good an LLM is. If a person has something to say and can give the LLM enough context to say it well, they should just write it themselves. There’s zero reason to bring an LLM into it. Doing so simply makes your writing less trustworthy, because as a reader I don’t know whether what I’m reading is genuinely from the writer or simply average-of-all-texts filler.

chrysoprace · yesterday at 11:19 PM

I disagree. If my colleague can't be bothered to write a PR comment themselves, then I can't be bothered to read it. If I can gain the same insights by interfacing with an LLM directly, then there's no point in this intermediary dance.

metalliqaz · yesterday at 11:10 PM

No it isn't. I really do not care what the LLM has to say. If a person has taken the (substantial) time necessary to fill the context with enough information that something interesting comes out, I would much rather they simply give me the inputs. The middleman is just digested Internet text. I've already got one of those on my end.

schrectacular · yesterday at 11:12 PM

Slop-y indeed

jibal · today at 12:11 AM

> The key thing here is not whether it's AI. The key thing is quality and signal. No one wants to read a low quality human comment either.

This is so obviously true to intelligent people (and is even a point made in the article) ... it's sad that you're getting downvoted.

The OP wrote:

> When I talk to a person, I expect that they are telling me things out of their head — that they have developed a belief and are trying to communicate it to me.

But when I'm having a conversation about a subject (rather than with a friend, partner, or other person with whom I have a relationship, where the conversation is part of the having of that relationship), I don't care what is in that person's head; I care about the truth of the matter, so I'm far more interested in their sources, their logic, and the validity of same. Unless I'm a psychologist doing a survey, why should I care about some random person's beliefs? Since I'm a truth seeker, I care about their arguments, and of course the quality of their arguments is of paramount importance.

I appreciate people who can back up their arguments, and LLM summaries that are chock full of facts gleaned from massive training data that includes a vast amount of human knowledge are fully appreciated--while being aware that hallucination is possible, so I often double check things regardless of the source.

OTOH, the pushback to this is from people I consider worse than irrelevant--they are not only willfully ignorant but reject knowledge seeking for irrational ideological reasons. (I myself see the LLM industry as extremely problematic, but as long as LLMs exist and are capable of producing quality signal--which is the given here--then I will use them.)

This whole page is illustrative: so many people are telling us things out of their head ... that have nothing to do with the article because they didn't read it. So they blather about their beliefs and opinions about support--because that's how they interpreted the title. These comments are useless.

P.S.

> If all you care about is the facts, and not the other’s relationship to them, why engage with a person at all?

I already said: I'm a truth seeker. Also I sometimes seek to persuade people in public forums--and not necessarily the person I'm corresponding with. And missing is any reason why I should care about internet randos' relationships with their beliefs, other than as a psychological survey.

> You could query an LLM for whatever subject, argument or counterpoint you wish.

I can do better, and can do more, as noted.

> Besides, your hypothetical summaries chock full of facts don’t exist, at least not yet. Most LLM summaries are chock full of filler, thus the name slop, thus why us “ignorant” people hate reading it.

This is an example of a belief that is not supported by the facts--if it's even a belief, which I doubt--it's emo ideology. Putting "ignorant" in quotes doesn't falsify it, and I have never encountered a remotely intelligent person who "hates" reading LLM summaries--this is in the same category as people who reject Wikipedia citations because "anyone can edit it". This person unintelligently reduces all LLM output to "slop"--maybe he should try actually reading the head article, which has a quite different take.
