Hacker News

lab700xdev | last Tuesday at 5:56 PM

You are right that the inference ecosystem (llama.cpp, vLLM) has moved aggressively to GGUF and Safetensors; if you are just consuming optimized models, you are in a safer position. However, I see two reasons why the risk persists:

1) The supply-chain tail: the training ecosystem is still heavily PyTorch-native. Researchers publishing code, LoRA adapters, and intermediate checkpoints often still ship them as pickle-based .pt files.

2) Safetensors metadata: even if the binary payload is safe, the JSON header of a .safetensors file often carries a license field, and AIsbom scans that too. Detecting a "Non-Commercial" (CC-BY-NC) license in a production artifact is a different kind of "bomb" - a legal one - but just as dangerous for a startup.
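
For readers curious what that header check involves, here is a minimal sketch, assuming the standard .safetensors layout (an 8-byte little-endian length prefix followed by a JSON header whose optional __metadata__ map holds free-form strings such as a license tag). The function names and license-matching heuristics are illustrative only, not AIsbom's actual implementation.

    import json
    import struct

    # Heuristic markers for non-commercial license tags (illustrative, not exhaustive).
    NONCOMMERCIAL_MARKERS = ("cc-by-nc", "non-commercial", "noncommercial")

    def read_safetensors_metadata(path):
        """Return the free-form __metadata__ map from a .safetensors header."""
        with open(path, "rb") as f:
            # The file starts with an 8-byte little-endian u64: the JSON header length.
            (header_len,) = struct.unpack("<Q", f.read(8))
            header = json.loads(f.read(header_len))
        # String key/value pairs such as "license" live under "__metadata__".
        return header.get("__metadata__", {})

    def flag_license(path):
        """Warn if the embedded license string looks non-commercial (heuristic)."""
        license_tag = read_safetensors_metadata(path).get("license", "").lower()
        if any(marker in license_tag for marker in NONCOMMERCIAL_MARKERS):
            print(f"{path}: WARNING, non-commercial license in metadata: {license_tag!r}")
        else:
            print(f"{path}: license metadata: {license_tag or '<none>'}")

    flag_license("model.safetensors")  # hypothetical file name

Note that many publishers leave __metadata__ empty and record the license only in the model card, so a header scan complements, rather than replaces, checking the repository's stated license.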


Replies

altomek | last Tuesday at 8:25 PM

This is a great tool! Would it be possible to add GGUF support? It may be a tricky format to parse, but GGUF has already seen a few attack vectors and I consider it untrustworthy. Being able to scan GGUF files would be great!
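
For context, a first sanity pass over a GGUF file could validate its fixed header before parsing any metadata. The layout assumed below (4-byte "GGUF" magic, little-endian u32 version, then tensor and metadata key/value counts, 32-bit in version 1 and 64-bit from version 2 on) follows the published GGUF spec; the count bound and function name are illustrative, not an existing AIsbom feature.

    import struct

    GGUF_MAGIC = b"GGUF"
    MAX_REASONABLE_COUNT = 10_000_000  # arbitrary guard against corrupted or hostile counts

    def check_gguf_header(path):
        """Validate the fixed GGUF header: magic, version, tensor count, metadata KV count."""
        with open(path, "rb") as f:
            magic = f.read(4)
            if magic != GGUF_MAGIC:
                raise ValueError(f"{path}: not a GGUF file (magic={magic!r})")
            (version,) = struct.unpack("<I", f.read(4))
            # Version 1 used 32-bit counts; version 2 widened them to 64-bit.
            fmt = "<II" if version == 1 else "<QQ"
            tensor_count, metadata_kv_count = struct.unpack(fmt, f.read(struct.calcsize(fmt)))

        if max(tensor_count, metadata_kv_count) > MAX_REASONABLE_COUNT:
            print(f"{path}: suspicious header counts "
                  f"(tensors={tensor_count}, metadata KVs={metadata_kv_count})")
        return version, tensor_count, metadata_kv_count

    check_gguf_header("model.gguf")  # hypothetical file name

Bounding the counts before allocating anything is the cheap part; the metadata key/value section that follows is where most of the parsing complexity, and the attack surface mentioned above, lives.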

solarengineer | last Tuesday at 11:03 PM

Could those who downvoted this comment please explain their reasoning? Is the rationale in the comment not valid?