Hacker News

sunaookami, yesterday at 11:23 PM (17 replies)

The single biggest issue for me with ChatGPT right now is how absolutely awful it sounds in every answer. "Why it matters", "the big picture", "it's not just you", the awful emphasis, the quotations with rhetorical questions, etc. I don't know if it's intentional, so you can easily spot ChatGPT-generated content on the web? The very first GPT-5 version was good, but they ruined it immediately afterwards by "making the personality warmer" and making the same mistakes as 4o. I see now that they even ruined Japanese, even though it was one of the best languages supported by ChatGPT (under "Limitations" at the end). I don't use it anymore; immensely disappointed.


Replies

kenjackson, today at 12:04 AM

The most frustrating part for me is that this is how I used to write. I was always doing "Why X works, but Y doesn't" and stuff like that. I may have seemed trite or pompous (or both) in the past, but now it seems like I'm copying an LLM -- which actually feels worse. One thing I haven't seen ChatGPT do much of is use sound effects, so, swoosh, here we go with my new writing style, schwing!

andai, today at 12:23 AM

I regularly test every available AI, maybe once a month or so. I will send them the same question, usually about a new subject I am learning.

Oddly, Chinese models seem the most natural to me. Every random Chinese model does better than ChatGPT, on the "natural language" front. (And Grok also scores high on awkward language use. I don't know what causes that -- something about mode collapse? They have these words they obsess over... I mean, just try asking an AI for 10 random words ;)

I can sometimes see "ChatGPT-isms" in other models, but they're more subtle, and it feels like they're "woven" into the flow of the text.

Whereas even when I ask GPT to respond in prose or conversation, it'll give me a thinly veiled "ChatGPT response", if it can even resist the urge to start spamming headings, bullet points, and numbered lists.

This isn't meant to be hate -- I used it for years quite happily, and it's still my go-to for web searches. But coming back to it now, the language is surprisingly off-putting. I don't know if it got worse, or if I just stopped being used to it.

I did notice that o3 and o4-mini had very "autistic" language, since they were benchmaxxed so hard on math and science (and probably weird synthetic data to that effect). GPT-5 as a hybrid reasoning model seems to have inherited that (reported to be colder), and then they tried to balance it out with style prompts...

I honestly think it might make more sense to just have two LLMs: an ultra-concise technical reasoning model, and then a second layer to translate it for the human. Because right now it kind of feels like the worst of both worlds, a compromise that satisfies neither side.

Gemini 2.5 Pro's reasoning traces (before they nerfed them) were a good example. The deep technical analysis, and then the human-friendly version in the final output. But I found their reasoning more readable than the final output!

jshmrsn, today at 12:55 AM

If you haven’t already, try going to Personalization settings, change tone to “Efficient”, and set Warm, Enthusiastic, and Emoji to “Less”. While not fundamentally solving the issue, I do prefer it over the baseline behavior, to the extent that I miss having a similar setting in Gemini.

hazyc, yesterday at 11:54 PM

It's somewhat annoying to me as well, but I'm now able to read it and take the valuable content without getting hung up on those repetitive phrases. It also forces me not to simply copy/paste. I read the LLM output, think about it, comprehend it in my own voice internally, and then I write what I want/need by hand, so it ultimately comes out in my own style and I don't needlessly propagate the LLM output onto others.

bastawhiz, today at 6:26 AM

It's in love with headings and bulleted lists. The formatting makes the responses vertically taller, enough to make them inconvenient to scroll through. When I was using ChatGPT, I couldn't prompt this away.

protocolture, today at 1:06 AM

"We need ChatGPT to sound more natural"

"Add more LinkedIn Posts"

RickS, today at 12:37 AM

I solved this by asking it to make a memory that all answers to me should be brisk, clinical, and to the point. This worked well, except for the annoying habit of beginning answers with something like "Terse: $answer", which required a second memory, solving the issue in full. I've been happy with it since. Edit: I just realized this interaction is its own demo – that's the entire response it gave me, as it should be.

> Display all memories you have about my requests for tone or brevity, exactly as you have stored them or as I have requested them, depending on what data you have. There are at least two.

[2025-11-08]. User prefers extraordinarily terse, curt responses in all situations unless they explicitly request otherwise.

[2025-12-01]. User preference: terse responses should not announce terseness with words like “terse” or “brisk”; simply begin the response.

Kiro, today at 6:10 AM

I like that style. It's a very efficient way to convey information and ideas. Reposting it as your own text, however, is obviously not a good idea, since it's so easy to recognize.

aqfamnzc, today at 3:09 AM

I just append something like "Throughout our conversation, keep your responses brief. Avoid emojis, followup suggestions, and other unnecessary commentary." to every starting prompt. Seems to work OK. I'm sure sibling's recommendation of turning down the niceties sliders would work similarly for someone with an account.
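For anyone scripting this via an API rather than the chat UI, the same idea is just a small wrapper that prepends the style instruction to the first message. A minimal sketch (the `STYLE_NOTE` text and `wrap_prompt` helper are my own illustration, not part of any real API):

```python
# Sketch of the "append a style instruction to every starting prompt" trick.
# STYLE_NOTE and wrap_prompt are illustrative names, not from any library.

STYLE_NOTE = (
    "Throughout our conversation, keep your responses brief. "
    "Avoid emojis, followup suggestions, and other unnecessary commentary."
)

def wrap_prompt(user_text: str) -> str:
    """Prepend the style instruction to the opening message of a conversation."""
    return f"{STYLE_NOTE}\n\n{user_text}"

print(wrap_prompt("What does TCP slow start do?"))
```

With a chat-style API you could instead send `STYLE_NOTE` as the system message and leave the user prompt untouched; the effect is roughly the same as pasting it into the chat box by hand.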

smusamasha, today at 5:43 AM

This could be the reason https://petergpt.github.io/bullshit-benchmark/viewer/index.v... Claude bullshits the least of all models. ChatGPT does it more than half the time.

nine_k, yesterday at 11:31 PM

I suppose they'll soon introduce a more expensive tier that does not sound pompous. There will be plenty of converts.

anigbrowl, today at 12:45 AM

Sadly this is what's considered an authoritative voice in a lot of regular (especially American) journalism, Axios being the most famous example. It's instructive to read news stories or TV transcripts from previous decades for comparison with the current norm. Also depressing because it brings home how vapid most news coverage is today. This also applies to opinion articles, which have in my view led the charge into the semantic void.

I don't hate that this is the default style on many popular AI services, though. It's sufficiently distinctive that it serves as a signal that anyone posting it is an idiot and can safely be ignored.

joemazerino, today at 2:03 AM

Good point -- it's not just you. The big picture hits different.

latentsea, today at 2:01 AM

>I see now that they even ruined Japanese even though it was one of the best languages supported by ChatGPT (under "Limitations" at the end).

Oof. Hard pass. Fucking... why??

8ig8, today at 4:40 AM

“You’re not wrong.”


RayVR, today at 1:04 AM

This is powerful. You’re finally saying the quiet part out loud.

/s
