> "Rather than scanning for known patterns, Claude Code Security reads and reasons about your code the way a human security researcher would: understanding how components interact, tracing how data moves through your application, and catching complex vulnerabilities that rule-based tools miss."
Fascinating! Our team has been blending static code analysis and AI for a while, and we think it's a clever approach for the security use case the Anthropic team is targeting here.
That quote jumped out at me for a different reason... it's simply a falsehood. Claude Code is built on an LLM, which is a pattern-matching machine. While human researchers undoubtedly do some pattern matching, they also do a whole hell of a lot more than that. The claim that their tool "reasons about your code the way a human would" is ridiculous on its face: we are not, in fact, running LLMs in our heads.
If this thing actually does something interesting, they're doing their best to hide that fact behind a steaming curtain of bullshit.