Hacker News

abdullahkhalids, last Friday at 11:59 PM

I work in quantum computing. There is quite a lot of material about quantum computing out there that these LLMs must have been trained on. I have tried a few different ones, but they all start spouting nonsense about anything that is not super basic.

But maybe that is just me. I have read some of Terence Tao's transcripts, and the questions he asks LLMs are higher complexity than what I ask. Yet, he often gets reasonable answers. I don't yet know how I can get these tools to do better.


Replies

sothatsit, yesterday at 2:42 AM

This often feels like an annoying question to ask, but what models were you using?

The difference between free ChatGPT, GPT-5.2 Thinking, and GPT-5.2 Pro is enormous for areas like logic and math. Often the answer to bad results is just to use a better model.

Additionally, sometimes when I get bad results I just ask the question again with a slightly rephrased prompt. Often this is enough to nudge the models in the right direction (and perhaps get a luckier response in the process). However, if you are just looking at a link to a chat transcript, this may not be clear.

jasonfarnon, yesterday at 12:32 AM

"I don't yet know how I can get these tools to do better."

I have wondered if he has access to a better model than I do, the way some people get promotional merchandise. A year or two ago he was saying the models were as good as an average math grad student, when to me they were like a bad undergrad. With the current models I don't get solutions to new problems. I guess we could do some debugging and try prompting our models with this Erdos problem and see how far we get. (edit: Or maybe not; I guess LLMs search the web now.)

nazgul17, yesterday at 12:23 AM

This was also my experience with certain algorithms in the realm of scheduling.

jomohke, yesterday at 12:10 AM

Which models did you try?