
mbreese · yesterday at 9:49 PM · 0 replies · view on HN

I’m not worried about the LLM getting offended if I don’t write complete sentences. I’m worried about not getting good results back. I haven’t tested this, so I could be wrong, but I think a better-formed, grammatically correct prompt may produce better output. I want to say the LLM will understand what I want better, but it has no understanding per se, just a predictive response. Knowing this, I want to get the best response back. That’s why I try to write complete sentences with good(ish) grammar. When I start firing back rushed commands, I feel like I get rushed responses in return.

I also tell the LLM “thank you, this looks great” when the code is working well. I’m not expressing gratitude… I’m reinforcing to the model that this was a good response, in terms it was trained to treat as success. We don’t have good external mechanisms for giving an LLM feedback that aren’t based on language.

Like most of the LLM space, these are just vibes, but they make me feel better. It has nothing to do with thinking the LLM is a person.