I'm reluctant to dignify LLMs with a term like "prompt injection" because that implies it's the unusual case
prompt injection is a thing that's just gonna happen with LLMs as they stand
the security model is fundamentally stupid
1. build a great big pile of all the good and bad information in the world
2. feed it to a nondeterministic stochastic parrot
3. put a filter on the output of the parrot and block the bad stuff
4. wtf did you expect to happen, you're doing security by regex on an input you can vary freely