I'm not terribly familiar with Yann LeCun and his work, but his opinion as expressed in this article is one I agree with: LLMs are a local maximum, and there is no path by which they lead to a "general AI" or "superintelligence" outcome, regardless of how sophisticated you make your models or how much processing power you throw at them.

futurism.com/artificial-intell

Thinking is not language-based.

Language is an API by which people approximate the concepts in their heads to each other.

The semantic connections between words and concepts are not fixed; they're fairly sloppy, with a high degree of tolerance in how they fit together.

This is a feature; this is how poetry works, for instance.

This is also why, in fields such as law and medicine, practitioners have fossilized specific semantic connotations and relationships using extremely precise jargon that you don't find outside those fields, and frequently use Latin, a language not subject to the same forces of semantic drift as English due to the paucity of everyday speakers, to ossify those concepts and keep them consistent.

Starting -from- language and working backward to the underlying conceptual framework is the opposite of how humans learn in the first place. Infants learn basic facts about the world during their early life, and are then taught the external cues that allow them to communicate facts about the world with their caretakers through consistent conditioning. It's the same way you teach a dog to sit: you associate the condition with the word 'sit' and thus achieve instruction.

While LLMs are certainly a clever way to create the impression of "understanding," it is, ultimately, a trick; the only 'understanding' comes from the human side. Clever Hans is not doing math at all, but engaging in a fuzzing of human responses to get the sugar cube.


https://infosec.exchange/users/munin/statuses/115555311780408966