> It's unethical (if not outright fraudulent) to publish LLM work as if it were your own.
I disagree with that. It's really a gray area.
If it's some lazy vibecoded shit, I think what you say totally applies.
If the human did the thinking, gave the agent detailed instructions, and/or carefully reviewed the output, then I don't think it's so clear cut.
And full disclosure, I'm reacting more to copilot here, which lists itself as the author and you as the co-author. I'm not giving credit to the machine, like I'm some appendage to it (which is totally what the powers-that-be want me to become).
> Claude setting itself as coauthor is a good way to address this problem, and it doing so by default is a very good thing.
I do agree that's a sensible default.
Telling someone you did something that you actually didn't do isn't a gray area, it's a lie.
Using AI tools to code and then hiding that is unethical imo.
> It's really a gray area.
Yes, it really depends on how much work the agent actually produced. It could be as little as doing a renaming or a refactoring, or executing direct orders that require no creativity or problem solving. In that case the agent shouldn't be credited any more than the linter or the IDE.