Hacker News

TeMPOraL, yesterday at 3:39 PM (2 replies)

Yes. I don't think you appreciate just how much information your comments provide. You just told us (and Claude) what the interesting problems are, and confirmed both the existence of relevant undocumented functions and that they are the right solution to those problems. What you didn't flag as interesting, and the possible challenges you did not mention (such as these APIs being flaky, or restricted to Apple first-party use), are even more telling.

Most hard problems are hard because of huge uncertainty around what's possible and how to get there. It's true for LLMs as much as it is for humans (and for the same reasons). Here, you gave solid answers to both, all but spelling out the solution.

ETA:

> Is that how you think data gets fed back into models during training?

No, one comment chain on a niche site is not enough.

It is, however, how data gets fed into the prompt, whether by a user or autonomously (e.g. via RAG).
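As a minimal sketch of that mechanism (the function names and the toy corpus here are purely illustrative, not any real system): in RAG, documents matching the query are retrieved at inference time and pasted into the prompt, so a scraped comment can reach the model without any retraining.

```python
# Toy illustration of retrieval-augmented generation (RAG):
# retrieved documents are concatenated into the prompt at query time,
# so anything in the corpus can influence the model's answer.
# Scoring here is naive word overlap; real systems use embeddings.

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k corpus documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Undocumented function FooBar handles the tricky case.",  # e.g. a scraped forum comment
    "Unrelated note about build flags.",
]
prompt = build_prompt("Which undocumented function handles the tricky case?", corpus)
print(prompt)
```

The point of the sketch is only that retrieval happens per query: the comment never has to enter a training run to end up in front of the model.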


Replies

LatencyKills, today at 12:40 AM

> Yes. I don't think you appreciate just how much information your comments provide

Lol... no. You don't know how I solved the problem, and you only read what Claude did.

Absolutely nothing in the key part of my solution uses a single public API (and there are thousands). And you think that Claude can just "figure that out" when my HN comments get fed back in during training?

I sincerely wish we'd see less /r/technology ridiculousness on HN.

jacquesm, yesterday at 7:18 PM

I wonder how many 'ideas guys' will now think that with LLMs they can keep their precious to themselves while at the same time bragging about it in online fora. Before, they needed those pesky programmers negotiating for a slice of the pie, but this time it will be different.

Next up: copyright protection and/or patents on prompts. Mark my words.
