Sorry for linking to Substack, but this one is so very good:
garymarcus.substack.com/p/a-kn

A few excerpts:

Apple has a new paper; it’s pretty devastating to LLMs.

Whenever people ask me why I (contrary to widespread myth) actually like AI, and think that AI (though not GenAI) may ultimately be of great benefit to humanity, I invariably point to the advances in science and technology we might make if we could combine the causal reasoning abilities of our best scientists with the sheer compute power of modern digital computers.

What the Apple paper shows, most fundamentally, regardless of how you define AGI, is that LLMs are no substitute for good well-specified conventional algorithms. (They also can’t play chess as well as conventional algorithms, can’t fold proteins like special-purpose neurosymbolic hybrids, can’t run databases as well as conventional databases, etc.)

A worthwhile article by Gary Marcus for anyone interested in LLMs and their limitations.

Open the article and search for 'Hanoi'.

The well-known Tower of Hanoi is a puzzle that junior programmers can solve, but above 7 disks, LLMs simply crumble under the complexity.
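
For scale, the conventional solution the post contrasts LLMs with is a few lines of recursion. Below is a minimal sketch in Python; the function and variable names are illustrative, not taken from the article or the Apple paper.

def hanoi(n, source, target, spare, moves):
    # Move the top n disks from source to target, using spare as scratch space.
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the n-1 smaller disks out of the way
    moves.append((source, target))               # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)   # restack the smaller disks on top of it

moves = []
hanoi(8, "left", "right", "middle", moves)
print(len(moves))   # 255, i.e. 2**8 - 1

The recursion is exact for any number of disks; the only thing that grows is the move list, and that exponential growth is exactly where the post says LLMs start to crumble.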

As Marcus says: "AI is not hitting a wall. But LLMs probably are."

@jeridansky


A picture of the Tower of Hanoi puzzle.

A puzzle with three pins and multiple disks on the left pin. The bottom disk is the largest, and each disk above it is smaller in diameter than the one below.

You have to recreate the same tower on the right pin, but:
- you can only move one disk at a time
- you cannot place a larger disk on top of a smaller one.
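
Those two rules are also trivial to check mechanically; here is a minimal sketch of a legality check in Python (the peg representation and names are illustrative, not from the post).

def is_legal_move(pegs, source, target):
    # pegs maps each peg name to a list of disk sizes, bottom disk first.
    if not pegs[source]:
        return False                             # only the top disk of a non-empty peg can move
    if not pegs[target]:
        return True                              # any disk may go onto an empty peg
    return pegs[source][-1] < pegs[target][-1]   # never place a larger disk on a smaller one

pegs = {"left": [3, 2, 1], "middle": [], "right": []}
print(is_legal_move(pegs, "left", "right"))      # True: the smallest disk can go anywhere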