@glyph This article gets at a lot of things that I think are very on point. Specifically:
1. that supervising a semi-autonomous machine is wildly unlike doing the same work yourself. It's work that humans are phenomenally bad at. Literally, a huge driver behind inventing automation in the first place was to save us from having to do constant, high-vigilance supervision, because we simply can't sustain it.
2. that the framing, anchoring, and filtering that LLMs do is a deep, critical, pervasive, and widely ignored risk that probably cannot be mitigated. As soon as you insert one of them into a workflow or communication stream, you've limited what's possible to do, think, or say to what an LLM can and will reproduce.
