This article is both validating and infuriating.

Validating because it speaks to the lie that generative models are "the worst they'll ever be."

One popular leaderboard...indicates some “reasoning” models – including the DeepSeek-R1 model from developer DeepSeek – saw double-digit rises in hallucination rates compared with previous models from their developers.

The myth of incessant improvement needs to die.

Infuriating because it concludes that "We may have to live with error-prone AI." We really, really don't. This technology has one primary purpose: driving the data and compute hoarding of the oligarchy. It should be considered a tool of the oppressor, and as such should be resisted, confounded, and broken.

We do not have to accept this.

newscientist.com/article/24795
