LLMs are *absolute concentrated brain poison* for these folks. They try out the LLM to see if it can solve a few simple problems and then extrapolate to more complex problems. Wrongly. They infer from social cues in their cohort, which are absolutely fucked by the amount of synthetic money (and maybe fraud?) driving a subprime-bubble-style mania. They infer from the plausibility of its outputs, which are absolutely fucked because the job of these models is to produce plausible outputs.
I've said it before, but the antivenin for this brain poison is to boot up #Ollama and try some really small #LLMs. With a sufficiently small model, it's completely obvious that the machine has no understanding, no consciousness, no intelligence, and no mental model of the problem it's being asked to solve. That insight equips the user to interact with a larger #LLM and not be bamboozled by its plausibility.
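If you want to try that experiment yourself, here's a minimal sketch of what "poking a tiny model" can look like, assuming Ollama is running locally on its default port and you've already pulled some small model (the name `tinyllama` below is just an example; any sufficiently small model works). It hits Ollama's local HTTP API with a simple question and a slight variation of it, so you can watch the confident-sounding answers fall apart:

```python
# Minimal sketch: poke a very small local model through Ollama's HTTP API.
# Assumes Ollama is running locally (default port 11434) and that you've
# pulled a tiny model, e.g. `ollama pull tinyllama` (model name is an example).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MODEL = "tinyllama"  # swap in whatever small model you pulled

def ask(prompt: str) -> str:
    """Send one prompt to the local model and return its full (non-streamed) reply."""
    payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Ask something that needs an actual mental model of the problem,
# then a slight variation, and compare the equally confident answers.
print(ask("A bat and a ball cost $1.10 total; the bat costs $1.00 more than the ball. What does the ball cost?"))
print(ask("A bat and a ball cost $1.10 total; the bat costs $1.05 more than the ball. What does the ball cost?"))
```

The point isn't the specific puzzle; it's that a small model will cheerfully produce fluent, plausible-sounding answers to both prompts, and the seams show much faster than they do with a frontier model.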
