
cs702 · yesterday at 9:31 PM

Our brains, which are organic neural networks, are constantly updating themselves. We call this phenomenon "neuroplasticity."

If we want AI models that are always learning, we'll need the equivalent of neuroplasticity for artificial neural networks.

Not saying it will be easy or straightforward. There's still a lot we don't know!


Replies

4b11b4 · today at 1:43 AM

I wasn't explicit about this in my initial comment, but I don't think you can equate more forward passes with neuroplasticity. For one thing, we (humans) also /prune/. And pushing new weights is in a similar camp to RL, which just overwrites the policy: you don't have the previous state anymore. But we humans, with our neuroplasticity, still know the previous states even after we've "updated our weights".
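
To make the "you don't have the previous state anymore" point concrete, here's a minimal sketch, assuming PyTorch and a hypothetical tiny model: a standard in-place gradient step overwrites the parameters, and the old values survive only if you explicitly snapshot them yourself.

    import copy
    import torch
    import torch.nn as nn

    model = nn.Linear(4, 1)                       # hypothetical tiny model
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    # The previous state survives only because we chose to save a copy of it.
    snapshot = copy.deepcopy(model.state_dict())

    x, y = torch.randn(8, 4), torch.randn(8, 1)
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()                                    # parameters updated in place; old values are gone

    # Measure how far the weights moved relative to the snapshot we kept.
    drift = sum((model.state_dict()[k] - snapshot[k]).abs().sum() for k in snapshot)
    print(f"total parameter change after one update: {drift.item():.4f}")

Continual-learning tricks like replay buffers or elastic weight consolidation are essentially workarounds for exactly this: they bolt some memory of previous states onto an update rule that otherwise discards them.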

nemomarx · yesterday at 10:54 PM

How would you keep controls in place with that, though? Safety restrictions, IP restrictions, etc. The companies selling models right now probably want to keep those fairly tight.
