I feel you. I don't think I've ever finished reading a sentence that started with "I asked <LLM> and he said..."
The "I asked <LLM>" disclosures vary between a) implying the LLM is an expert resource, which is bad, and b) disclosure that an LLM was referenced with the disclosure being transparent about it, which is typically good but more context dependent.
Unfortunately (a) is more common, and the backlash against it has been eroding the community incentive to provide (b).
These are the worst. I'm fine with you dumping your own half-formed thoughts into an LLM, getting something reasonably structured out, and then rewriting that in your own voice, elaborating, etc.
But the "This is what ChatGPT said..." stuff feels almost like "Well I put it into a calculator and it said X." We can all trivially do that, so it really doesn't add anything to the conversation. And we never see the prompting, so any mistakes made in the prompting approach are hidden.
The only thing worse is "I asked my AI and he said..."
You don't possess an AI; you're using someone else's AI.
This is usually an "auto-skip" for me as well.
Still preferable to just pasting it without revealing the source. LLMs have become a brain prosthesis for some people, which is incredibly sad.
I work for a political party (not American), and the President is addicted to using ChatGPT for Facebook posts.
> "I asked <LLM> and he said..."
An alternative I tried was sharing links to my LLM prompts/responses. That failed badly.
I like the parallel with linking to a Google/DuckDuckGo search term, which is useful when done judiciously.
Creating a good prompt takes intelligence, just as crafting good search keywords (plus operators) does.
I felt that the resulting downvotes reflected an antipathy towards LLMs and a sense that using one shows a lack of taste.
The problem was that the messengers got shot (me and the LLM), even though the message itself, a set of obscure facts, was useful and interesting.
I've now noticed that the links to the published LLM results have rotted; a shared link isn't a permanent record of the prompt or the response. Disclaimer: I avoid using AI, except for smarter search.
My take is orthogonal. Overall, I've become less tolerant of low-quality token-generators of all kinds (including people): tropes, bad reasoning, clunky writing, whatever. But I digress.
If we want a human "on the other end," we gotta get to ground truth. We're fighting a losing battle if we think text-based forums can survive without some additional identity component.
I find the consistent anthropomorphization to be grating as well