This is the thing, right?

When a _person_ does something absolutely inane and I call them on it, they will eventually stop doing the inane thing.

I worked with this guy a while back. He was junior, I was the tech lead on the project. I set the coding standards for it and he didn't like what I picked because it required a lot of work up front. It required docs on public methods, it required fairly high test coverage targets (both branch and instruction), it required following specific naming conventions and programming standards. I would often tell him to make his commits smaller.

No smaller than that.

Smaller still.

You couldn't just turn your brain off.

He fought me every step of the way, but eventually:

He learned, and what's more, he began to internalize why I was doing it the way that I did.

Code reviews started flying by. It became _easy_.

AIs, even the really good ones, don't learn from this sort of process. Oh sure, they have _memory_, but that memory is itself unreliable and subject to the same context window restrictions as everything else. They are by their nature ephemeral.

So there's a lot less incentive for me to give it a thorough review.

Unless it is my agent and my project.
