@kevingranade
@cwebber (Christine Lemmer-Webber) to put it another way, i think why LLMs work is 1) language is highly structured, 2) humans (generally) are blessed or cursed with pattern-recognition instincts that make it impossible for us not to see meaningful patterns in everything, 3) these two qualities make it easy to generate, from signal, noise patterns that look indistinguishable from signal despite still being just noise
@kevingranade
@cwebber (Christine Lemmer-Webber) and i think this disguised noise is hazardous to humans in the same way that it is hazardous to training sets. the skill degradation thing is really alarming, and it seems to happen a lot faster than simple atrophy; this is my theory for why. see also https://www.youtube.com/watch?v=2r6UJlXCiG0
