The thing that transforms large-language-model "AI"¹ from "a bad idea"² to "a fatal idea"² is "self-hosting"²: in other words, the idea the people making the AI can "program"¹ the AI by way of inputs to the AI.

This idea is foundational to the OpenAI approach, and it does not work, but the model tech is good enough at lying³ that in a formal QA setting it appears to work.

tenforward.social/@SeanAloysiu

¹ Scare quotes
² Regular quotes
³ Yes, an unthinking object can lie. It just can't tell the truth.

0

If you have a fediverse account, you can quote this note from your own instance. Search https://mastodon.social/users/mcc/statuses/115651515512368093 on your instance and quote it. (Note that quoting is not supported in Mastodon.)