Hacker News

kay_o today at 9:18 AM | 8 replies

> However, there are lots of people in the world who live their whole life by vibing

Why are they often so desperate to lie and non-consensually harass others with their vibing rather than be honest about it? Why do they think they are "helping" with hallucinated rubbish that can't even build?

I use LLMs. It is not difficult to: ethically disclose your use, double-check all of your work, ensure things compile without errors, not lie to others, not ask it to generate ten paragraphs of rubbish when the answer is one sentence, and respect the project's guidelines. But for so many people this seems like an impossible task.


Replies

automatic6131 today at 9:33 AM

> Why do they think they are "helping" with hallucinated rubbish that can't even build?

Because they can't tell the difference between what the machine is outputting and what people have built. All they see is the superficial resemblance (long lines of incomprehensible code) and the reward that the people writing the code have received, and they want that reward too.

pjc50 today at 9:48 AM

"Main character energy". What they're really doing is protecting their view of themselves as smart, and they're making a contribution for the sake of performing being an OSS dev rather than out of need or altruism.

AI is absolutely terrible for people like that, as it's the perfect enabler.

StevePerkins today at 1:51 PM

> Why do they think they are "helping"

It's not about helping. It's about the feeling of clout. There are still plenty of people who look at GitHub profile activity to judge job candidates, etc. What gets measured gets repeated.

I believe that most of the ills of social media would disappear if we eliminated the "like" and "upvote" buttons and the view counts. Most open-source garbage pull requests would likewise go away if contributions were somehow anonymous.

a96 today at 1:05 PM

I think a lot of people who haven't given it much thought might see it as an arbitrary rule, or even as some kind of gatekeeping or discrimination. They haven't seen why people would want to avoid dealing with the output.

This might not be helped by the fact that there are a lot of seemingly psychotic commenters attacking anything that might have touched an LLM or any generative model at some point. Their slur- and expletive-filled outbursts make every critical response look bad by vague association.

Having sensible explanations for the rules and criticism clearly visible, like those in TFA, should help. But looking at other similar patterns, I'm not optimistic. And education isn't likely to happen, since we're way past any Eternal September.

drchickensalad today at 9:33 AM

You're asking why oil doesn't act like water. It's not really an impossible task; it's just not one they agree with.

ramon156 today at 9:31 AM

It's the same as cheating in a game. You are given an """advantage""", so lying about it seems like the best option.

MattDaEskimo today at 11:12 AM

I wonder how many are account farming.

jcgrillo today at 10:30 AM

LLMs are in this case enabling bad behavior, but open source software has always been vulnerable to it. The people who use LLMs to do this kind of thing are the kind of people who would have done it without LLMs, but for the large effort it would have taken. We're just now learning how large that group is.

This is a good thing: it's an opportunity to make open source development processes robust to this kind of sabotage.
