Haha. Apparently, LLMs not only cannot model absence of information, they also cannot model negation, according to some MIT boffins who found this out for visual models and image captions.

A bit of Schadenfreude (and new insights for model-poisoning approaches, maybe?).

The conclusion, though, is certainly anticlimactic:

… if we blindly apply these models, it can have catastrophical consequences

Nothing new there, to the educated reader anyway.

