Hacker News

observationist last Thursday at 11:59 PM

This is definitely fascinating. If you could do AI brain surgery, selectively tuning a model's knowledge and priors, you'd be able to create awesome and terrifying simulations.


Replies

nottorp yesterday at 11:03 AM

You can't. To use your terms, you have to "grow" a new LLM. "Brain surgery" would be modifying an existing model, and that's exactly what they're trying to avoid.

ilaksh yesterday at 10:06 AM

Activation steering can do that to some degree, although normally it targets just one or two specific behaviors rather than a whole body of knowledge.
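For concreteness, here's roughly what that looks like: compute a steering vector as the difference of mean activations between two contrastive prompts, then add it back into the residual stream during generation. This is a minimal sketch assuming GPT-2 via Hugging Face transformers; the layer index and scale are illustrative placeholders, not tuned values.

```python
# Minimal activation-steering sketch (assumptions: GPT-2, layer 6, scale 4.0).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

LAYER = 6   # which transformer block to steer (illustrative choice)
SCALE = 4.0  # steering strength (illustrative choice)

def mean_activation(text):
    # Capture the block's output hidden states for a prompt,
    # then average over sequence positions.
    acts = {}
    def hook(mod, inp, out):
        acts["h"] = out[0] if isinstance(out, tuple) else out
    h = model.transformer.h[LAYER].register_forward_hook(hook)
    with torch.no_grad():
        model(**tok(text, return_tensors="pt"))
    h.remove()
    return acts["h"].mean(dim=1)  # shape: (1, hidden)

# Steering vector = difference of means for a contrastive prompt pair.
steer = mean_activation("I love this. It is wonderful.") - \
        mean_activation("I hate this. It is terrible.")

def steering_hook(mod, inp, out):
    # Add the scaled steering vector to the block's output
    # (broadcasts over all sequence positions).
    if isinstance(out, tuple):
        return (out[0] + SCALE * steer,) + out[1:]
    return out + SCALE * steer

handle = model.transformer.h[LAYER].register_forward_hook(steering_hook)
ids = tok("The movie was", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=20, do_sample=False)
handle.remove()
print(tok.decode(out[0]))
```

With a positive scale the continuation drifts toward the "love" prompt's sentiment; this steers one direction in activation space, which is why it works for a trait or two but not for rewriting a whole body of knowledge.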

eek2121 yesterday at 1:31 AM

Respectfully, LLMs are nothing like a brain, and I discourage comparisons between the two. Beyond the complete difference in how they operate, a brain can innovate; as of this moment, an LLM cannot, because it relies on previously available information.

LLMs are just seemingly intelligent autocomplete engines, and until someone figures out a way to stop the hallucinations, they aren't even great at that.

Every piece of code a developer churns out using an LLM is built from code other developers have already written (inheriting both its strengths and weaknesses, by the way). Every paragraph you ask it to write in a summary? Same. Every other problem? Same. Ask it to summarize a document? Don't trust it there either. (Note: expect cyber-attacks around this scenario; it is already beginning to happen. Documents are made intentionally obtuse to fool an LLM into hallucinating a summary, which leads someone to sign a contract and get conned out of millions.)

If you ask an LLM to solve something no human has solved, you'll get a fabrication. This has fooled quite a few people and caused them to jeopardize their careers (lawyers, etc.), which is why I am posting this.
