We should have more hygiene when it comes to AI.
Text coming out of an LLM should be in a special codeblock of Unicode, so we can see it is generated by AI.
Failing to do so (or tampering with the marker) should be considered bad hygiene, like a doctor who doesn't wash their hands before surgery.
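As a concrete illustration of the idea: Unicode has an invisible "tag characters" block (U+E0000–U+E007F) that could, in principle, carry a hidden provenance label alongside the visible text. This is only a sketch of one possible encoding, not a standard; the `"ai"` label and the function names are made up for the example.

```python
# Sketch: mark LLM output with invisible Unicode tag characters
# (U+E0000 block) so tooling can detect AI-generated spans.
# The label "ai" is an arbitrary choice for this example, not a standard.

TAG_BASE = 0xE0000  # offset into the Unicode "Tags" block

def mark_ai(text: str, label: str = "ai") -> str:
    """Prefix text with an invisible label built from tag characters."""
    prefix = "".join(chr(TAG_BASE + ord(c)) for c in label)
    return prefix + text

def is_ai_marked(text: str, label: str = "ai") -> bool:
    """Check whether text carries the invisible label."""
    prefix = "".join(chr(TAG_BASE + ord(c)) for c in label)
    return text.startswith(prefix)

marked = mark_ai("This paragraph came from an LLM.")
print(is_ai_marked(marked))            # True
print(is_ai_marked("Human-written."))  # False
```

Note that such a marker is trivially stripped, which is exactly the "tampering" case the comment above calls bad hygiene.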
What will that accomplish? Does it give license to developers to check in code that they don't understand/trust fully?
Ultimately, people should be responsible for the code they commit, no matter how it was written. If AI generates code that is so bad that it warrants putting up a warning sign, it shouldn't be checked in.
> Text coming out of an LLM should be in a special codeblock of Unicode, so we can see it is generated by AI.
Why not start with manual tagging, like "Ad"?
> Text coming out of an LLM should be in a special codeblock of Unicode, so we can see it is generated by AI.
That's exactly my proposed solution:
https://jacquesmattheij.com/classes-of-originality/