Hacker News

5o1ecist · today at 5:32 PM · 4 replies

> seems to be usage of a local llm that rewrites the text while keeping meaning untouched.

There are no two ways of expressing something in ways that might create equal impressions.

Relevant: https://www.perplexity.ai/search/hey-hey-someone-on-hn-wrote...


Replies

mhitza · today at 6:20 PM

I don't really understand the argument you're proposing.

Is it impression in a stylistic sense (flourishes in the language used)? That's what I'm arguing the LLM usage is for.

Or is it impression in the subjective sense of what an author would instill through their message? Feelings, imagery, and such.

Or the impression given to the reader? "This person gives me the impression that they know what they're talking about", or "that they don't know what they're talking about"?

I don't know which argument you're proposing, but I'd like to make an observation about the LLM usage. I don't know what model the Perplexity response is based on, but some of them are "eager to please" by default in conversation ("you're absolutely right" and all the other memes). If you "preload" one with a contrarian approach ("make a brutally honest critique of this comment in reply to this other comment"), it will gladly do a 180: https://chatgpt.com/s/t_699f3b13826c8191b701d0cc84923e71

palmotea · today at 6:12 PM

> There are no two ways of expressing something in ways that might create equal impressions.

> Relevant: https://www.perplexity.ai/search/hey-hey-someone-on-hn-wrote...

Did you just use an LLM to write your comment and are citing it as a source?

kerisi · today at 5:37 PM

The link doesn't work; it says the thread is private.

StilesCrisis · today at 5:38 PM

The link is private.
