Hacker News

cactusplant7374 · yesterday at 7:20 PM · 7 replies

Unless you've discovered the secret sauce, LLM comments are very obvious. Even Altman revealed that they focused on coding at the expense of writing.


Replies

kube-system · yesterday at 8:16 PM

With the current batch of SOTA models, it is not hard to prompt a model to pass the sniff test on social media forums. If you don't believe me, try it.

All you really need to do is give it guidelines for a style to follow and styles to avoid. There are also plenty of skills people have already written to accomplish this.
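A minimal sketch of what that kind of prompting might look like, assuming the OpenAI Python client; the model name and the specific style rules are placeholders for illustration, not anything the commenter specifically endorsed:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical style guidelines: what to imitate and what to avoid.
    STYLE_GUIDE = """
    Write like a regular forum commenter:
    - Keep it short; make one or two points, not a structured list.
    - Use casual phrasing and punctuation where natural.
    Avoid:
    - Filler like "Great question!", "It's worth noting", "In conclusion".
    - Bullet-point summaries and overly balanced "on the other hand" framing.
    """

    def draft_reply(thread_context: str) -> str:
        """Draft a comment that follows the style guide above."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any current SOTA model
            messages=[
                {"role": "system", "content": STYLE_GUIDE},
                {"role": "user", "content": f"Reply to this thread:\n{thread_context}"},
            ],
        )
        return response.choices[0].message.content

The point of the sketch is only that the "guidelines to follow / styles to avoid" live in the system prompt; swapping in reference material (sample comments in the desired voice) works the same way.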

dgellow · yesterday at 7:29 PM

The obvious ones are the ones you notice

carlgreene · yesterday at 7:47 PM

I have worked with LLMs for a couple of years at a very non-technical level, and it was not that difficult to give them proper prompting and reference material.

You are probably reading LLM content just about everywhere and have no idea. Obviously there are easy-to-spot things, but the stuff you don't spot is the stuff you don't spot.

potsandpansyesterday at 7:23 PM

People who fancy themselves good LLM content detectors just end up accusing everything they don't like of being LLM content.

The only thing worse than a slop comment is the people who bitch about it incessantly. I'm convinced it's become a new expression of mental illness.

crooked-v · yesterday at 7:37 PM

[flagged]