Hacker News

Stop Sloppypasta

53 points by namnnumbr today at 5:25 PM | 20 comments

Comments

madrox today at 10:45 PM

I find that I don't have a lot of sympathy for people angry at this type of behavior, even though I share the disdain for someone else's AI output. The people doing this kind of thing are not the kind of people to be reading this manifesto. We've been creating bait content for a long time, and humans have never been given the tools to manage this in any sophisticated fashion. The internet was not a bastion of high quality content or discourse pre-AI. We need better tools as content consumers to filter content. Ironically, AI is what may actually make this possible.

I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.

I suspect the endgame of this is probably the fulfillment of Dead Internet Theory, where it's just AI creating content and AI browsing the internet for content, and users will never engage with it directly. That person who spent 10 seconds getting AI to write something will be consumed by AI as well, only to be surfaced to you when you ask the AI to summon and summarize.

And if that fills people with horror at the inefficiency of it all, well, like I said, it isn't like the internet was a bastion of efficiency before. We smiled and laughed for years that all of this technology and power is just being used to share cat videos.

OptionOfT today at 11:04 PM

It's very weird how many people take the output of ChatGPT/Gemini/Claude as gospel, and don't question it at all.

It's also very impolite to dump 5 pages of text on someone, because now you're asking _them_ to validate it.

When I ask a question in Slack I want people's input. Part of my work is also consulting the GPTs and seeing if the information makes sense.

And it shows up the most with people who answer questions in domains they're not 100% familiar with.

simianwords today at 11:01 PM

I've been thinking about this: what if AI ran autonomously and found things to criticise that are factually incorrect?

It is easy to do in social media because the context is global but in enterprises it is a bit harder.

Something like "flagged as very likely untrue by AI" is something I would really appreciate.

I see many posts and comments throughout the internet that can easily be dispelled by a single LLM prompt. But this should only be used when the confidence is really high.

rrr_oh_man today at 11:10 PM

It's ironic, because the site has all the hallmarks of an LLM generated website.

incognito124 today at 10:44 PM

Related: https://news.ycombinator.com/item?id=44617172

stabbles today at 10:28 PM

I wouldn't call "ChatGPT says" an equivalent of LMGTFY. The former is people in awe with the oracle, the latter is people tired of having to look something up for others.

uniq7 today at 10:32 PM

This article's proposal for stopping sloppypasta is to convince the people who do it to stop, but I am more interested in what someone who receives sloppypasta can do.

How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

I've never done that so far because I feel like I am either exposing their serious lack of professionalism or, if I wrongly assumed it was AI, plainly telling them that their work looks like bad AI slop.

namnnumbr today at 5:25 PM

Tired of people at work pasting raw ChatGPT output into chats, I coined the term "sloppypasta" and wrote this rant to explain why it's rude, along with some guidelines for what to do instead.

sloppypasta: Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.

chewbacha today at 11:10 PM

When you must remind someone to “think” when using a technology, because the path of least resistance is not to think… it feels like the technology isn’t really helping.

They are stealing our work, turning it into a model, and then renting our decisions to less intelligent people.

They (tech companies) don’t want us to be smart any more. They are commodifying intelligence.