Hacker News

sonink · yesterday at 11:34 AM · 3 replies

Suggestions are absolutely fine. But this is baiting. ChatGPT could easily have given me that information without the bait, and I would have happily consumed it. Maybe if it did it once, that would be fine, but it kept on doing it: bait after bait after bait.

The objective was clearly to increase the engagement "metrics". It seems to me that the leadership will take whatever shortcuts are required for growth.


Replies

JimDabell · today at 1:03 AM

It’s worse than baiting. What happens a lot to me is:

Me: [Explains situation, followed by a request.]

AI: [7–8 paragraphs and bullet-point lists explaining the situation back to me.] Would you like me to [request]?

Me: That’s literally what I just asked you to do.

taneq · today at 10:37 AM

It might not even be the leadership at this stage. It’s entirely possible that “rounds of conversation” is a metric that their reinforcement learning has been told to optimise.

llm_nerd · yesterday at 12:00 PM

This seems overly cynical.

Firstly, tl;dr is a very real thing. If the user asks a question and the LLM both answers it and writes an essay about every probable follow-up question, that would overwhelm most people, and few would think it's a good idea. That isn't how a conversation works, either.

Worse still, if you're on a usage quota or paying by token, and you ask a simple question and get volumes of unasked-for information, most people would be very cynical about that, suspecting that the provider is trying to saturate usage unprompted.

Gemini often ends a response with "Would you like to know more about {XYZ}?", and as an adult capable of making decisions and controlling my urges, 9 times out of 10 I just ignore it and move on, my original question satisfied, without digging deeper. I don't see the big issue here. Every now and then it piques my interest, though, and I actually find it beneficial.

The prompts for possible/probable follow-up lines of inquiry are a non-issue, and I see no issue at all with them. They are nothing compared to the user-glazing that these LLMs do.
