I've always used "proper" sentences with LLMs since day one. I think I do a good job of not anthropomorphizing them; it's just software. But that doesn't mean you have to use it in exactly the same way as other software. LLMs are trained mostly on human-made text, which I imagine is far richer in proper sentences than Google search queries are. I don't doubt that modern models will usually give you at least something sensible no matter the query, but I've always assumed the results would be better if the input resembled the training data and was worded in a crystal-clear manner, without asking the model to fill in the blanks. After all, I'm not searching for web pages by listing disconnected keywords; I want a specific output that logically follows from my input.
It's a mirror. Address it like it's a friendly person and it will glaze you; that's the source of much of the sycophancy.
My queries look like the beginnings of encyclopedia articles, and my system prompt tells the machine to use that style and tone. It works because it's a continuation engine: I open with a synopsis of what I want explained, the way an encyclopedia entry opens, and the machine completes the entry.
It doesn't use the first person, and the sycophancy is gone. It also doesn't add cute bullshit, and it helps me avoid LLM psychosis, of which the author of this piece definitely has a mild case.
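The approach above can be sketched as a prompt pair. This is a minimal illustration, not the commenter's actual wording: the system-prompt text, the function name, and the example synopsis are all hypothetical, showing only the shape of the technique (a style-setting system message plus a user turn that reads like the opening of an entry).

```python
# Hypothetical sketch of encyclopedia-style prompting. The exact wording
# of the system prompt and the example topic are illustrative guesses.

SYSTEM_PROMPT = (
    "Write in the style and tone of an encyclopedia entry: third person, "
    "neutral register, no first-person remarks, no filler or flattery."
)

def encyclopedia_query(topic_synopsis: str) -> list[dict]:
    """Build a chat-style message list where the user turn reads like the
    opening synopsis of an encyclopedia article, so a continuation engine
    completes the entry rather than replying conversationally."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": topic_synopsis},
    ]

# The user turn is an unfinished opening sentence for the model to continue.
messages = encyclopedia_query(
    "Garbage collection is a form of automatic memory management in which"
)
```

The point of the trailing, unfinished sentence is that the most probable continuation is the rest of the article, not a first-person reply, which is what suppresses the conversational sycophancy.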
I'm also tired of seeing claims about productivity improvements from engineers who are self-reporting; the METR paper showed those reports are unreliable.