"Design Patterns for Securing LLM Agents against Prompt Injections" is an excellent new paper that provides six design patterns to help protect LLM tool-using systems (call them "agents" if you like) against prompt injection attacks

Here are my notes on the paper: simonwillison.net/2025/Jun/13/
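To give a feel for what one of these patterns looks like in practice, here's a minimal Python sketch in the spirit of the paper's plan-then-execute pattern: the model commits to a fixed list of tool calls before it sees any untrusted tool output, so an injection in that output can't add new actions. The `llm()` helper, tool names, and behaviour are hypothetical placeholders for illustration, not code from the paper.

```python
from typing import Callable

# Allow-listed tools the agent is permitted to call.
# Names and behaviour here are made up for illustration.
TOOLS: dict[str, Callable[[str], str]] = {
    "fetch_calendar": lambda _: "…calendar data…",
    "send_summary": lambda text: f"sent: {text[:50]}",
}

def llm(prompt: str) -> str:
    # Stand-in for a real model call; returns a canned plan here.
    return "fetch_calendar\nsend_summary"

def run(user_request: str) -> str:
    # 1. Plan: the model picks tools *before* seeing any untrusted tool
    #    output, so injected text can't add new steps to the plan.
    plan = llm(f"List tool names, one per line, to satisfy: {user_request}")
    steps = [line.strip() for line in plan.splitlines() if line.strip() in TOOLS]

    # 2. Execute: run only the pre-approved steps; tool output is treated
    #    as data and never fed back in as fresh instructions.
    data = user_request
    for name in steps:
        data = TOOLS[name](data)
    return data

print(run("Summarize today's meetings and email me the summary"))
```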

