Problem: LLMs can't reliably defend against prompt injection.
Solution: A specialized filtering model that detects prompt injections.
Problem: The filter is itself an LLM, so it too is susceptible to bypass and prompt injection.
Solution: We reduce the set of acceptable instructions to a more predictable space and filter out anything that doesn't match.
Problem: If you over-specialize, the LLM won't understand the instructions.
Solution: We define a domain-specific language in the system prompt, with all allowable commands and parameters. Anything else is ignored.
Problem: We just reinvented the CLI.