If you bothered to read it, you’d find that I am embracing the tools and I still feel there is craft. It’s just different.
But snark away. It’s lazy. And yes it is so damn tedious.
Speaking directly, if I catch the scent of ChatGPT, it's over.
People put out AI text, primarily, to run hustles.
So its writing style is a kind of internet version of "talking like a used car salesman".
With some people that's fine, but anyone with a healthy epistemic immune system is not going to listen to you.
If you want to save a few minutes, you'll just have to accept that.
I agree with that for programming, but not for writing. The stylistic tics are obtrusive and annoying, and make for bad writing. I think I'm sympathetic to the argument this piece is making, but I couldn't make myself slog through the LinkedIn-bot prose.
"But snark away. It’s lazy. And yes it is so damn tedious."
Looks like this comment is embracing the tools too?
I'd take cheap snark over something somebody didn't bother to write but expects us to read.
Having an LLM write your blog posts is also lazy, and it's damn tedious to read.
If you feel so strongly about your message, why would you outsource the writing of your thoughts to such an extent that readers can tell it reads like LLM output rather than your own voice? It's like me making a blog post by outsourcing the writing to someone on Fiverr.
Yes, it's fast, it's more efficient, it's cheap - the only things we as a society care about. But it doesn't convey any degree of care about what you put out, and care is probably desirable for a personal, emotionally charged piece of writing.
I think the Oxide Computer LLM guidelines are wise on this front:
> Finally, LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and it is the least a reader can do to labor to make sense of it.
https://rfd.shared.oxide.computer/rfd/0576#_llms_as_writers
The heavy use of LLMs in writing makes people rightfully skeptical that it's worth putting in the time to read what's written there.
Using LLMs for coding is different in many ways from writing, because more of the proof is in the pudding - you can run it, you can test it, etc. But the writing _is_ the writing, and the only way to know it's correct is to put in the work.
That doesn't mean you didn't put in the work! But I think it's why people are distrustful and have a bit of an allergic reaction to LLM-generated writing.