In my day-to-day work, I am encountering more and more codebases with large or small parts written using LLMs. I have heard the idea that the LLM steals a signal embedded in human communication: that someone thought this was important enough to write down. An analogous casualty is the thinking that happens as you write. One thing the LLM has taken from people, it seems, is that moment when, sitting down to write 1,700 lines of code by hand, you eventually go 'Jesus fucking Christ, is this really the right way to do this? Am I even solving the right problem?'.
But if the machine sits and writes 1,700 lines of code on your behalf, you are not forced to confront your sin until it is much too late. Only then, you have nothing to do but live in the fallen world that you have made for yourself, the world of helper functions and perfectly grammatical English comments that simply recite the next line of code.