I think there is a lot we can learn from genAI, but it seems like we are determined to learn the opposite lesson. Anywhere you find an LLM being a legitimately great tool, you've probably found a flawed system that is measuring the wrong thing or that is being systematically entrained incorrectly. Like, note-taking: LLMs are great at "summarizing" by finding "important highlights", and this definitely tells us something, but it's not something about LLMs.
