Hacker News

kbelder, last Wednesday at 7:10 AM

That is a good point, and I think the takeaway is that there are lots of degrees of freedom here. Open training data would be better, of course, but open weights are still better than a completely hidden model.


Replies

enriquto, last Wednesday at 7:45 AM

I don't see the difference between "local, open weights" and "local, proprietary weights". Is that just the handful of lines of code that call the inference?

The model itself is just a binary blob, like a compiled program. Either you get its source code (the complete training data) or you don't.
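The "handful of lines" point can be made concrete with a toy sketch (the arrays below are made-up stand-ins, not any real model): once you hold the weights blob, the entire inference engine fits in a few lines, and running the weights reveals nothing about the training data that produced them.

```python
import numpy as np

# Hypothetical "open weights": arrays standing in for an opaque weights
# file. They are runnable numbers, but the training data behind them is
# not recoverable from them.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))
b1 = rng.standard_normal(8)
W2 = rng.standard_normal((8, 2))
b2 = rng.standard_normal(2)

def infer(x):
    """Forward pass of a tiny two-layer MLP: the whole 'inference code'."""
    h = np.maximum(x @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2

y = infer(np.ones(4))  # deterministic output for a fixed input
```

Whether the weights came from curated data, scraped data, or something else entirely makes no difference to these few lines, which is exactly the sense in which they resemble a compiled binary.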