Most of my pip installs now come from Claude Code suggesting them and me just hitting enter. The model was trained months ago, so it has no clue what got compromised this week. We've built the worst possible filter for "is this package safe right now".
What filter?
You say you rely on CC to suggest software to install from the internet, and then you install it.
I haven't heard anyone suggest CC or any LLM as a "filter" for "is this package safe right now", and it seems like a very bad heuristic to me, for the reason you gave among others.
Stale training data is part of it. But even a current model can't tell what setup.py is going to run on your box. Nothing actually inspects the package before it executes. You'd want something that pulls the metadata and checks what hooks are in there before anything runs.
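A minimal sketch of that idea: before installing, look inside the sdist for files that pip will execute at install time. The helper name and the "risky file" list here are my own assumptions, not an existing tool; a real checker would also cover build backends declared in pyproject.toml.

```python
import io
import tarfile

# Files pip executes for legacy sdists, i.e. arbitrary code at install time.
# This list is an illustrative assumption, not exhaustive.
RISKY_FILES = {"setup.py"}

def install_time_hooks(sdist_bytes: bytes) -> list[str]:
    """Return the names of install-time-executable files found in an sdist.

    Hypothetical helper: inspects the tarball *before* anything runs,
    which is the check an LLM suggestion can't do for you.
    """
    hooks = []
    with tarfile.open(fileobj=io.BytesIO(sdist_bytes), mode="r:*") as tar:
        for member in tar.getmembers():
            # sdists nest everything under a single "pkg-version/" directory,
            # so top-level risky files appear as "pkg-version/setup.py".
            parts = member.name.split("/")
            if len(parts) == 2 and parts[1] in RISKY_FILES:
                hooks.append(parts[1])
    return hooks
```

You'd fetch the sdist bytes from the index (e.g. via the PyPI JSON API) and refuse to proceed, or at least prompt, when the list is non-empty.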
By "the worst possible filter" do you mean "hitting enter when claude tells you to"?
Stop blaming the LLM for your laziness and lack of due diligence.