Sorry if this is really obvious to you all, but after my rant, and after reading the Verge article that discusses thinking vs. language and how that limits LLMs, I, as a non-user of these, had a bit of an epiphany. A very small one, mind.
LLMs are, basically, large corpora of text you can query, right? As an old person, this sounds like a search engine to me, but from what I hear, they seem really bad at reliable retrieval.
People often claim that these models can somehow "synthesize" stuff. (1/n)