Hacker News

newsoftheday · yesterday at 4:51 PM

Would your prompt have been identical and produced identical results today, or tomorrow? Which version of the AI would you have used? Were there bugs that made the post or comment interesting, bugs that would have been absent from your response because they had already been fixed?


Replies

nobody9999 · yesterday at 10:44 PM

>Would your prompt have been identical and produced identical results today, or tomorrow? Which version of the AI would you have used? Were there bugs that made the post or comment interesting, bugs that would have been absent from your response because they had already been fixed?

Why is that relevant to GP's point?

I can't speak for anyone else, but I come to HN to discuss stuff with other humans. If I wanted an LLM's regurgitations (it's not AI, it's a predictive text algorithm), I could generate those myself; I don't need "helpful" HNers to do it for me unasked.

When I come here I want to have a discussion with other sentient beings, not the gestalt of training data regurgitated by a bot.

Perhaps that makes me old-fashioned and/or bigoted against interacting with large language models, but that's what I want.

In discussion, I want to know what other sentient beings think, not an aggregation of text tokens chosen by their probability of appearing in a particular sequence, as determined by the data fed to the model.

The former can be (but may well not be) a creative, intellectual act by a sentient being. The latter never will be, as it's an aggregation of existing data/information: a sequence of tokens cobbled together based on the frequency with which those tokens appear in a particular order in the model's corpus.
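As a rough illustration of what I mean, here's a toy sketch of next-token sampling in Python. The tokens, counts, and context are invented for illustration; a real model learns probabilities over tens of thousands of tokens from its corpus.

    import random

    # Toy "corpus statistics": how often each token follows the
    # context "the cat" in some hypothetical training data.
    next_token_counts = {"sat": 50, "ran": 30, "meowed": 15, "quantum": 5}

    def sample_next_token(counts: dict[str, int]) -> str:
        """Pick the next token with probability proportional to its count."""
        tokens = list(counts)
        weights = [counts[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]

    # Two runs over the same context can pick different tokens, which
    # is also why two people can get different answers to one prompt.
    print(sample_next_token(next_token_counts))
    print(sample_next_token(next_token_counts))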

That's not to say that LLMs are useless. They are not. But their place is not in "curious conversation," IMNSHO.

WesolyKubeczek · yesterday at 5:52 PM

In any case, it should have some more thought put into it: a summary, a highlight, what you found useful or insightful about it. Just dumping the response is lazy and disrespectful.

And if two people can get opposite results by giving the same model the same prompt asking a very specific question, it looks like bunk anyway. LLMs don't care whether they are correct.