@tomjennings could you be clearer about what the harm is? For me, the biggest harms in creating LLMs are using content like images and text without the creator's consent, for example by ignoring robots.txt files. I also think it's harmful to use output from LLMs without human review, especially where safety is at stake. Are those the harms you're talking about?
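(For anyone unfamiliar with robots.txt: it's the standard way a site signals what crawlers may fetch. A minimal sketch of a crawler that honors it, using Python's stdlib urllib.robotparser; the site URL and the "ExampleBot" user agent are placeholders:)

```python
from urllib import robotparser

# Fetch and parse the site's robots.txt (placeholder domain).
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# A well-behaved crawler checks before fetching each page;
# ignoring this check is the kind of harm described above.
if rp.can_fetch("ExampleBot", "https://example.com/some/page"):
    print("Allowed to fetch")
else:
    print("Disallowed by robots.txt; skipping")
```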

If you have a fediverse account, you can quote this note from your own instance. Search https://cosocial.ca/users/evan/statuses/114364452152357964 on your instance and quote it. (Note that quoting is not supported in Mastodon.)