I'm curious how much collective responsibility we have for pointing out LLM output when we see it.
I, for one, am glad when someone identifies that an image or video has been generated. I'm not as good as others at identifying those. It helps me learn, but it also helps me not promote/boost such content.
I am, however, familiar with the patterns of LLM text. I'm a little flabbergasted when nobody says, "Why did this have to be AI?" or "This is AI" or "You could have just quoted from the article and linked to it instead of putting it wholesale into an LLM and generating a summary before providing the link."
Do people not see it, or do they not care?
Do I (or any of us) say something for the sake of others, or just block?
Does it help you when people identify it when you don't see it?