Before you get into it, the caveats are all there in the post. You'll hear me critique the AI industry *a lot*, and those critiques haven't changed: I'm still concerned about the effects on the environment, on skill decline, on the DDoS'ing of the internet, and especially on disempowerment *generally*. All of that remains true.

This is going to be a somewhat niche post, for people who are particularly interested in neurosymbolic computation (a group that includes me): the idea that neither LLMs nor constraint solvers are sufficient on their own, and that the right path for many problems combines them.
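To make that combination concrete, here's a minimal propose-and-verify sketch. Everything in it is illustrative: `llm_propose` is a hypothetical stand-in for a real model call (stubbed with random sampling so the sketch actually runs), and the constraints are a toy example. The shape is the point: the neural side proposes, the symbolic side only accepts candidates that satisfy hard constraints.

```python
import random

# Hypothetical stand-in for an LLM call: proposes candidate variable
# assignments. A real system would prompt a model and parse its output;
# here we just sample randomly to keep the sketch self-contained.
def llm_propose(variables, domain):
    return {v: random.choice(domain) for v in variables}

# Symbolic side: hard constraints a candidate must satisfy.
# Toy constraints for a three-variable problem.
def satisfies_constraints(assignment):
    return (
        assignment["a"] != assignment["b"]      # a and b must differ
        and assignment["b"] < assignment["c"]   # b strictly less than c
    )

def neurosymbolic_solve(variables, domain, max_attempts=1000):
    """Propose-and-verify loop: the LLM proposes, the solver disposes."""
    for _ in range(max_attempts):
        candidate = llm_propose(variables, domain)
        if satisfies_constraints(candidate):
            return candidate  # first candidate that passes every check
    return None  # no valid candidate within the budget

if __name__ == "__main__":
    print(neurosymbolic_solve(["a", "b", "c"], domain=[1, 2, 3]))
```

The division of labor is what matters here: the generator can be as fuzzy as it likes, because nothing it proposes is accepted until the symbolic checker signs off.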

@cwebber (Christine Lemmer-Webber) The only conversation I've ever had with someone who works on one of the "foundation models" (Anthropic) that didn't leave me wanting to commit acts outside my morals and ethics was with someone who thought the entire direction of more compute and more data was fundamentally flawed, and that what was necessary was something not dissimilar to what you're describing about Winter. In particular, a family of kernels within the language model that enables it to interrogate its own training. He was clear that he didn't mean "intelligence," but simply that the model would be capable of producing cogent, real explanations of its behavior, whether in natural language or not, which could be used as part of a feedback loop to refine output and internal representations, as well as give humans the opportunity to understand the nature of a response.

I found the post pretty interesting and share your concerns about LLMs.
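As a rough illustration of the feedback loop that comment describes, here's a sketch of the outer loop: generate an answer, ask the model to explain it, check the explanation against the answer, and feed any mismatch back as a critique. Every function is a hypothetical stand-in (the LLM calls are stubbed with fixed strings so the sketch runs, and a real consistency check would be far more substantial, e.g. parsing the explanation into verifiable steps).

```python
def generate_answer(question: str, critique: str = "") -> str:
    # Hypothetical LLM call; stubbed with a fixed reply so the sketch runs.
    return "42"

def generate_explanation(question: str, answer: str) -> str:
    # Hypothetical LLM call asking the model to explain its own answer.
    return f"because 6 * 7 = {answer}"

def consistent(answer: str, explanation: str) -> bool:
    # Toy symbolic check: does the explanation actually support the answer?
    # Here, trivially, whether the explanation mentions it at all.
    return answer in explanation

def answer_with_feedback(question: str, max_rounds: int = 3):
    """Generate, self-explain, check; feed mismatches back as critique."""
    critique = ""
    for _ in range(max_rounds):
        answer = generate_answer(question, critique)
        explanation = generate_explanation(question, answer)
        if consistent(answer, explanation):
            return answer, explanation  # explanation supports the answer
        # The mismatch itself becomes input to the next round.
        critique = f"Your explanation ({explanation!r}) did not support {answer!r}."
    return None, None

if __name__ == "__main__":
    print(answer_with_feedback("What is 6 * 7?"))
```

Note that the explanation serves double duty here, exactly as the comment suggests: it's both a signal for refining the output and an artifact a human can read to understand the response.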


If you have a fediverse account, you can quote this note from your own instance. Search https://spookygirl.boo/notes/aitlkdw6t56tjl81 on your instance and quote it. (Note that quoting is not supported in Mastodon.)