Haha. Apparently, LLMs not only cannot model the absence of information, they also cannot model negation, according to some MIT boffins who found this out for visual models and image captions.
A bit of Schadenfreude (and new insights for model-poisoning approaches, maybe?).
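For the curious, a probe in the spirit of the finding can be cobbled together in a few lines. Everything concrete here (CLIP as the model, the COCO demo image, the caption wording) is my own choice for illustration, not taken from the paper:

    # Score one image against an affirmative and a negated caption with CLIP.
    # If the model handled negation, the negated caption should score far lower;
    # in practice the two scores tend to be embarrassingly close.
    import requests
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Standard CLIP demo image: two cats on a couch.
    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = Image.open(requests.get(url, stream=True).raw)

    captions = [
        "a photo of a cat",
        "a photo with no cat in it",  # negated caption
    ]
    inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=1)

    for caption, p in zip(captions, probs[0].tolist()):
        print(f"{p:.3f}  {caption}")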
The conclusion, though, is decidedly anticlimactic:
… if we blindly apply these models, it can have catastrophic consequences
Nothing new there, to the educated reader anyway.