I have this intuition that LLMs are infohazards because of some spooky statistical relationship between model weights and human neural activity. Human brains react to stimuli by potentiating synapses in response; that's what learning is. Some stimuli cause potentiation and others don't. Humans also react to language; that's hardly a controversial claim in neuroscience.

