That's one good use of LLMs: fuzz testing / attacks.
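For context on what the LLM would be compared against: a traditional fuzzer, at its simplest, just mutates seed inputs at random and watches for crashes. A minimal sketch (the `parse` function here is a hypothetical target under test, not a real API):

```python
import random

def parse(data: bytes) -> None:
    # Hypothetical function under test; raises on a malformed input.
    if data.startswith(b"MAGIC") and len(data) > 8:
        raise ValueError("crash: overlong header")

def mutate(seed: bytes) -> bytes:
    # Flip a few random bytes of the seed -- the core "dumb fuzzing" step.
    buf = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        i = random.randrange(len(buf))
        buf[i] = random.randrange(256)
    return bytes(buf)

def fuzz(seed: bytes, iterations: int = 1000) -> list[bytes]:
    # Collect every mutated input that makes the target throw.
    crashes = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            parse(case)
        except Exception:
            crashes.append(case)
    return crashes
```

Real fuzzers like AFL add coverage feedback on top of this loop; the question below is essentially whether an LLM's input generation buys anything qualitatively beyond that.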
Not contradicting this (I am sure it's true), but why is using an LLM for this qualitatively better than using an actual fuzzer?