
mjr00 yesterday at 5:49 PM

> It's a matter of standards. [...] when I see someone not doing it that says things to me about who they are as a person.

When you're communicating with a person, sure. But the point is this isn't communicating with a person or other sentient being; it's a computer, which I guarantee is not offended by terseness and lack of capitalization.

> It's akin to shopping carts in parking lots.

No, not returning the shopping cart has a real consequence: it negatively impacts a human being who has to do that task for you; the same goes for littering, etc. There is no consequence to using terse, unpunctuated, lowercase-only text with an LLM.

To put it another way: do you feel it's disrespectful to type "cat *.log | grep 'foo'" instead of "Dearest computer, would you kindly look at the contents of the files with the .log extension in this directory and find all instances of the word 'foo', please?"

(Computer's most likely thoughts: "Doesn't this idiot meatbag know cat is redundant and you can just use grep for this?")
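
As the parenthetical implies, the pipe collapses into a single grep call. A minimal sketch, assuming GNU or BSD grep, where -h suppresses the per-file filename prefix so the output matches the piped version:

    # same result, no redundant cat
    grep -h 'foo' *.log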


Replies

mbreese yesterday at 9:49 PM

I’m not worried about the LLM getting offended if I don’t write complete sentences. I’m worried about not getting good results back. I haven’t tested this, so I could be wrong, but I think a better-formed, grammatically correct prompt may produce better output. I want to say the LLM will understand what I want better, but it has no understanding per se, just a predictive response. Knowing this, I want to get the best response back. That’s why I try to write complete sentences with good(ish) grammar. When I start writing rushed commands, I feel like I’m getting rushed responses back.

I also tell the LLM “thank you, this looks great” when the code is working well. I’m not expressing gratitude… I’m reinforcing to the model that this was a good response, in a way it was trained to recognize as success. We don’t have good external mechanisms for giving an LLM feedback that aren’t based on language.

Like most things in the LLM space, this is just vibes, but it makes me feel better. It has nothing to do with thinking the LLM is a person.