I will admit, I made this thread when I was pretty frustrated and upset about it. systemd is so key to the security of many people's machines. I don't necessarily see security reviews as being a problem the same way that codegen etc. are. And I was wrong about the PR review vulnerability risk: *for now*, afaict, the review bot is just performing read-only security review and is not taking auto-action on merging, which is the real risk.

So maybe I overreacted? But Poettering's comment reads the way most comments I have read from people being drawn into AI-generated code have gone, which is "you gotta admit that things are changing, these things are getting really good," and then opening the door to AI-generated contributions. Which I am very wary of...

@cwebber (Christine Lemmer-Webber) This. I do think that writing code oneself and running it through checkers (any, and the more the better, roughly, as long as they don't replace humans) is a good thing. But these checkers should run sandboxed and just flag issues, like any linter. And if that stuff is LLM-powered, so be it. But agentic coding? LLM-driven suggestions/refactoring? I'm soooo wary of this.

