Stop personifying LLMs. Doing so assigns agency and responsibility to a tool, and we all know you can't blame (or credit!) a tool.
My LLM wrote this code, and then committed it.
↓
I wrote this code and committed it using an LLM.
Getting this language wrong skews our relationship with our tools: it lets us dodge our responsibility to check and correct the output, and it keeps us from taking credit for the final result.
What it means to be human should grow with new technology, not shrink.