As much as I bash on the stupid ways that companies are trying to shove AI down everyone's throats, it does seem to be remarkably good at finding vulnerabilities. I'm a little concerned that our over-reliance on racing to patch everything 24/7 isn't going to scale well for much longer (if indeed it ever has).
As this blog post from Anthropic illustrates, this argument is becoming a frequent refrain from people advocating that companies invest more in AI.
I'm not necessarily saying they're wrong in this respect. But I am generally wary of any industry that claims you need more of what it is selling just so you can offset the negative externalities caused by the unbridled use of its technology.
"Claude Opus 4.6, released today, continues a trajectory of meaningful improvements in AI models’ cybersecurity capabilities. Last fall, we wrote that we believed we were at an inflection point for AI's impact on cybersecurity—that progress could become quite fast, and now was the moment to accelerate defensive use of AI. The evidence since then has only reinforced that view. AI models can now find high-severity vulnerabilities at scale. Our view is this is a moment to move quickly—to empower defenders and secure as much code as possible while the window exists."