I can't reply on Bluesky, since this post isn't bridged, but let me address Anil's point as clearly as I can.

Co-opting the language and theory of "harm reduction" in order to advocate for using AI tools *is* a form of AI boosterism. Saying so isn't intellectually dishonest; it's being clear about the effects that co-opting that language and theory has.

bsky.app/profile/did:plc:sg2e2

In order for "harm reduction" to make sense as an approach to AI proliferation, you have to posit that, in some fundamental way, there's a valid use for these things — or at least, a way of using them that doesn't cause harm to the people around you.

Harm reduction works in the case of drugs because the risks of harm to the immediate individual can be drastically reduced, and because harm to those around users can be nearly eliminated.

Neither of those is true for AI.

