Problem: LLMs can't defend against prompt injection.

Solution: A specialized filtering model that detects prompt injections.

Problem: That too is susceptible to bypass and prompt injection.

Solution: We reduce the set of acceptable instructions to a more predictable space and filter out anything that doesn't match.

Problem: If you over-specialize, the LLM won't understand the instructions.

Solution: We define a domain-specific language in the system prompt, with all allowable commands and parameters. Anything else is ignored.

Problem: We just reinvented the CLI.

0
0
0

If you have a fediverse account, you can quote this note from your own instance. Search https://infosec.exchange/users/mttaggart/statuses/115897445824260150 on your instance and quote it. (Note that quoting is not supported in Mastodon.)