What I really dislike in the "LLM makes mistakes just like humans" line of argument is that in its foundation there is one specific principle:

"A bad answer is better than no answer. Badly done work is better than work not done at all."

This principle is older than the AI/LLM discussion, and I have always hated seeing it applied in practice.

There are some situations and places where it is valid.

But it is not some kind of universal law of nature. You cannot scale it up unconditionally without consequences.
