Unfortunately, even AI experts don’t actually understand why LLMs work at all. This core truth led to an epiphany: since LLMs are already maximally incomprehensible, obfuscation cannot make them any more so. So, in the spirit of the IOCCC, we can set aside the futile goal of incomprehensibility and focus purely on size.

https://www.ioccc.org/2024/cable1/

