This post is obviously (almost insultingly) written by AI. That said, the idea behind it (IaC taken to an extreme) is a good one, which leaves me in a really weird spot in terms of how I feel about it.
You’d think people would at least spend 2 minutes changing obvious tells like “Why This Matters”…
It feels like intellectual dishonesty when it's not declared at the top of the article. I have no issue with AI when authors are honest about their usage. But if you put your name on an article without clearly mentioning that LLMs wrote at least a significant piece of it, it feels dishonest and I disconnect from it.
It's weird that only a small percentage of the comments here seem to have caught on to the obvious LLM-ness of it all (I missed it the first go-around, but on a second read you're absolutely correct).
I'm wondering, once the exceedingly obvious LLM style creeps further into the public mind, whether we'll look back at these blog posts and just cringe at how blatant they were in retrospect. The models are going to improve (and people will catch on that you can't just use vanilla model output as a blog post without some actual editing), and these posts will stand out like some very sore thumbs.
(ps: all of the above is 100% human written ;)