Ghostty has a well-balanced AI usage policy. It doesn't ban AI tools outright, but it sets clear boundaries to prevent the problems that have become common in open source contributions lately.

What stands out is that it's not about being anti-AI. The policy explicitly says the maintainers use AI themselves. The rules are there because too many people treat AI as a magic button that lets them contribute without actually understanding or testing what they're submitting. The requirement that AI-generated PRs must be for accepted issues only, fully tested by humans, and properly disclosed feels like basic respect for maintainers' time.

I'm thinking of adopting something similar for my projects, even though they're not at Ghostty's scale yet. Better to set expectations early.
