AI / LLM (I mock a terrible sales pitch)

Oh man, if only there were some human-based solution to the problem of AIs with commit permissions autonomously accepting malicious pull requests at odd hours of the night. If only humans could do something about that manually and at their own pace, but no, all we can do is buy another AI to watch the first AI. Better buy a third one just in case.

(The sales pitch begins literally next sentence)

Blog post excerpt: We're entering an era where AI agents attack other AI agents. In this campaign, an AI-powered bot tried to manipulate an AI code reviewer into committing malicious code. The attack surface for software supply chains just got a lot wider. This wasn't a human attacker working weekends. This was an autonomous bot scanning repos continuously. You can't defend against automation with manual controls; you need automated guardrails.

This post breaks down each attack, shows the evidence, and explains what you can do to protect your workflows.

If you have a fediverse account, you can quote this note from your own instance. Search https://infosec.exchange/users/0xabad1dea/statuses/116156086294765872 on your instance and quote it. (Note that quoting is not supported in Mastodon.)