Yesterday, I had an argument with an AI booster. I'm not going to link, both because I don't want to platform that and because I don't want anyone to go harass them. But what I found very interesting was that when I asked point-blank whether there was any degree to which ethical problems with LLMs could make them not want to use AI, they told me no, there was not, and implied that they evaluated AI purely on the basis of its efficacy.

I have neither the time nor the inclination to argue that point with them further when it comes to AI. But I do think there's a broader point worth critical examination, especially as tech continues to build out surveillance, age verification, automated filtering and censoring, and other tools that do immense damage when used by authoritarians.

We *cannot* afford to evaluate tech purely based on whether it "works" or not.
