Hacker News

minimaxir · last Sunday at 2:00 AM · 2 replies

You can stop an LLM from using em-dashes by just telling it to "never use em-dashes". The same type of prompt engineering mitigates almost every sign of AI-generated writing, which is one reason why AI writing heuristics/detectors can never be fully reliable.
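The mitigation described above amounts to prepending a style instruction to the request. A minimal sketch of what that looks like, using a hypothetical `build_messages` helper (the instruction wording and message format are illustrative, modeled on common chat-completion APIs, not taken from the comment):

```python
def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat message list with a style-guard system prompt
    that suppresses a known AI-writing tell (em-dashes)."""
    system = (
        "Never use em-dashes. Write plainly and state opinions directly."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Summarize the tradeoffs of Rust vs. Go.")
```

The resulting list would then be passed as the `messages` argument to whatever chat API is in use; the point is only that the suppression lives in a standing system prompt rather than in each user turn.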


Replies

dcre · last Sunday at 2:30 AM

This does not work on Bryan, however.

jgalt212 · last Sunday at 2:42 PM

I guess, but even if you set aside the obvious tells, pretty much all expository writing out of an LLM still reads like pablum: no real conviction, and tons of hedging around any stated opinion.

"lack of conviction" would be a useful LLM metric.
