I think there is a lot we can learn from genAI, but it seems like we are determined to learn the opposite lesson. Anywhere you find an LLM being a legitimately great tool, you've probably found a flawed system: one that is measuring the wrong thing, or that is systematically being entrained incorrectly. Like, note-taking. LLMs are great at "summarizing" by finding "important highlights," and that definitely tells us something, but it's not something about LLMs; it's something about how we take notes.