later, he goes on to say:

I get the distinct impression that the entire field was assuming that we were going to have to build a lot more into LLMs before they'd be capable of full consciousness

this is just arrogant. experiential consciousness requires the capability to self-reflect. yes, a 200k-token context window is probably larger than the working memory of most humans, but that does not equate to human-level experiential consciousness.

LLMs do not and cannot understand consequence, which is a fundamental requirement for experiential consciousness.

in other words, your pet dog or cat at home has more experiential consciousness than an LLM.
