This is the new trend that keeps me awake at night: adversaries now have access to off-the-books inference, and they will be able to scan pretty much any widely used open source project to discover and exploit zero-days. Making a project closed source offers a bit more security, but it only buys time, since current models make it possible to reverse engineer closed binaries with relative ease.
If you are sufficiently funded, you could benefit from the flip side of discovery. But it looks bleak if you are the sole maintainer of a large project that is a dependency in many deployed instances, with no revenue or donations, and nobody digging deep enough to care or spend inference on your behalf (would your company pay for the extra inference is the question, and more often than not the answer is no). With this happening on both sides of the fence, we are going to see massive disruptions across the board.
Cybersecurity is becoming a proof-of-work of sorts, and the race is on. There may already be an unknown number of zero-days being silently discovered and deployed. That will likely have an impact on the economics too, making access to these capabilities far more widespread.
I do wonder if this means our tech stacks will go back to being as boring and simple as possible... you wouldn't hack a static HTML website served by nginx, would you?
It's nothing new - even without LLMs there are automated tools that will try stuff to see if your application is vulnerable, and you can abuse a misconfigured nginx server. To be fair, to your point, LLMs are amazing pattern recognizers: a pattern seen in one codebase applies to another codebase, so a vulnerability there is likely too. I'm unsure if they can "innovate" (still, recognizing patterns is enough). They can spot that a pattern they've seen causes a crash, but I don't know if we're at the point where they can put two and two together and chain a set of unrelated code issues to, for example, exfiltrate credentials.
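As an aside on the "misconfigured nginx" point, one classic example is the alias path-traversal misconfiguration: a `location` prefix without a trailing slash paired with an `alias` that has one lets `..` escape the aliased directory. A sketch (the paths here are made up for illustration):

```nginx
# Vulnerable: "location /assets" lacks a trailing slash while the alias has one.
# A request for /assets../secrets.txt resolves to /srv/app/static/../secrets.txt,
# escaping the intended directory.
location /assets {
    alias /srv/app/static/;
}

# Safer: matching trailing slashes close the off-by-slash traversal.
location /assets/ {
    alias /srv/app/static/;
}
```

This is exactly the kind of known, mechanical pattern that both traditional scanners (e.g. gixy) and LLMs can flag across many codebases at once.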