I know some people are thinking "well, pulling off this kind of thing would have to be controlled with the intent of a human actor"

It doesn't have to be.

1. A human could *kick off* such a process, and then it runs away from them.
2. It wouldn't even require a specific prompt to kick off a worm. There's enough scifi out there for any one of the barely-monitored openclaw agents to decide this is something it should do.

Whether it's kicked off by a human explicitly or a stray agent, it doesn't require "intentionality". Biological viruses don't have interiority / intentionality, and yet are major threats that reproduce and adapt.

The interesting thing about claiming an AI worm is imminent is that this is the first time I've said something about AI where the well-informed people among both my anti-AI and pro-AI friends fully agree with me. If you're paying enough attention, you can see all the pieces falling into place.

In fact, the biggest debate is whether this has happened already, and we just haven't seen proof of it yet. I don't know. Given how long things like the xz attack have sat undetected, and given how much chaos of computation is happening in datacenter usage right now, I wouldn't doubt it.
If you have a fediverse account, you can quote this note from your own instance. Search https://social.coop/users/cwebber/statuses/116188395204368076 on your instance and quote it. (Note that quoting is not supported in Mastodon.)