Sorry if this is really obvious to you all, but after my rant and reading the Verge article that discusses thinking vs. language and how that limits LLMs, I, as a non-user of these, had a bit of an epiphany, a very small one, mind.

LLMs are, basically, large corpora of text you can query, right? To an old person like me, that sounds like a search engine, but from what I hear, they seem really bad at reliable retrieval.

People often claim that these models can somehow "synthesize" stuff. (1/n)

