You could literally ask the LLM to obfuscate it and I bet it would do a pretty good job. Good luck parsing 1,000 lines of code manually to identify an exploit that you’re not even specifically looking for.
Yup, add in some poetic prompt injection…..
Yup, add in some poetic prompt injection…..