I'm taking a bunch of mental notes about the utility of LLMs, and one thing I keep coming back to is how they evade any means of measuring effectiveness. NOW: we know that for any probabilistic process, if you compose it repeatedly, the probability of the "right answer" approaches 0.
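The compounding claim can be sketched in a few lines. Assuming each step of a chained process succeeds independently with some probability p < 1 (the 0.95 below is an illustrative assumption, not a measured figure), the chance that an n-step chain gets everything right is p**n, which decays toward 0:

```python
def chain_success(p: float, n: int) -> float:
    """Probability that all n independent steps succeed,
    under the (assumed) independence model: p ** n."""
    return p ** n

# Even a fairly reliable per-step probability erodes quickly:
for n in (1, 10, 50, 100):
    print(n, chain_success(0.95, n))
```

With p = 0.95, a 100-step chain succeeds less than 1% of the time, which is the intuition behind the "approaches 0" claim.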

