This is a great summary by Rohit Kumar Thakur of the Apple paper "The Illusion of Thinking":

ninza7.medium.com/apple-just-p

The researchers asked LLMs (large language models) and LRMs (large reasoning models) to solve well-known problems like the Tower of Hanoi with a setup the models very likely never encountered during training (e.g., with 10 disks instead of 7), and, unsurprisingly, the models failed miserably.
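To give a rough sense of why the larger setup is harder, here is a minimal Python sketch (not from the paper or the summary) of the standard recursive Tower of Hanoi solution; an optimal solution for n disks takes 2^n − 1 moves, so 10 disks require 1023 moves versus 127 for 7.

```python
# Minimal sketch: the classic recursive Tower of Hanoi solution,
# shown only to illustrate how quickly the required move sequence grows.
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the list of moves that solves an n-disk Tower of Hanoi."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)  # move n-1 disks out of the way
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # stack the n-1 disks back on top
    return moves

# The optimal solution length is 2**n - 1 moves.
for n in (7, 10):
    print(n, len(hanoi(n)))  # prints: 7 127, then 10 1023
```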

If you don’t want to subscribe to Medium, there is an archived copy you can read as well: archive.ph/ASo9a

