I almost scrolled past the "Safety" section, because in the past it always seemed like either silly sci-fi scaremongering (IMO) or things I'd classify as "sharp tool, dangerous in the wrong hands". But I'm glad I stopped, because it actually talked about real, practical issues like the prompt injections you mention. I wonder if the industry term "safety" is pivoting to refer to other things now.
I thought AI safety was dumb/unimportant until I saw this dataset of dangerous prompts: https://github.com/mlcommons/ailuminate/blob/main/airr_offic...
I don't love the idea of knowledge being restricted... but I also think these tools could cause harm to others in the wrong hands.