Hacker News

cryptonector · today at 4:03 AM

> Especially for LLMs, they are not (till now) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date is not in its training data -- you only have to be fast enough.

First of all: it's not as though no new LLMs are being trained. Of course they are.

Second: continually learning LLMs are not far off. Since they can typically search the web via agents, they can effectively "learn" now, and they can also learn (less well) by writing notes into a document hidden from you. Indeed, some LLMs can inspect your other sessions with them and refer to them in future sessions -- I've noticed this with Claude.

Third: already we see some AI companies wanting to train their models on your prompts. It's going to happen.

> The next thing is that we also have open source and open weight models that every one of us with a decent consumer GPU can fine-tune and adapt, so it's not only in the hands of a few companies.

There's a pretty good chance that LLMs buff open source, yes.

> > We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.

> Why should this happen? The moment you make your idea public, anyone can build it. [...]

This was always the case, but now the cycle is faster. So if you must use an LLM, you might run one on your own hardware -- then your prompts are truly yours. But as TFA notes, the AIs will learn just from your (and your private LLM's) searches, and in some cases that will be enough for them to figure out what you're up to. Oh sure, maybe the Microsofts and Googles of the world won't be able to capitalize on the millions of interesting ideas floating about, but still: the moment you uncloak, the machine will eat your future alive, so you'll try to stay off its radar and build a moat it can't see (good luck!). Well, that's what TFA says; it seems very plausible to me.