LLMs have no model of correctness, only typicality. So:

“How much does it matter if it’s wrong?”

It’s astonishing how frequently both providers and users of LLM-based services fail to ask this basic question — which I think has a fairly obvious answer in this case, one that the research bears out.

(Repliers, NB: Research that confirms the seemingly obvious is useful and important, and “I already knew that” is not information that anyone is interested in except you.)

1/ 404media.co/chatbots-health-me
