What I really dislike about the "LLMs make mistakes just like humans" line of argument is that at its foundation lies one specific principle:
"Bad answer is better than no answer. Badly done work is better than the work not done at all"
This principle is older than the AI/LLM debate, and I have always hated seeing it applied in practice.
There are situations and places where it is valid (a rough first draft can beat a blank page). But it is not some universal law of nature: you cannot scale it up unconditionally without consequences.