Thinking is not language-based.
Language is an API by which people approximate the concepts in their heads to each other.
The semantic connections between words and concepts are not fixed; they're fairly sloppy, with a high degree of tolerance in how they fit together.
This is a feature; this is how poetry works, for instance.
This is also why, in fields such as law and medicine, practitioners have fossilized specific semantic connotations and relationships in extremely precise jargon found nowhere outside those fields, frequently drawing on Latin - a language largely immune to the semantic drift that shapes English, given how few people speak it day to day - to ossify those concepts and keep them consistent.
Starting -from- language and working backwards to the underlying conceptual framework is the opposite of how humans learn in the first place. Infants learn basic facts about the world early in life, and only then are they taught the external cues that let them communicate those facts to their caretakers, through consistent conditioning - the same way you teach a dog to sit: you associate the condition with the word 'sit', and thus achieve instruction.
While LLMs are certainly a clever way to create the impression of "understanding", it is, ultimately, a trick - the only 'understanding' comes from the human side. Clever Hans was not doing math at all; he was fuzzing human responses until he got the sugar cube.