Mental model mismatches can be either the easiest or the hardest thing to overcome when teaching. In the easy case, the person you're teaching gets a free win: something they thought was a problem turns out not to be a problem at all. In the hard case, you have to walk backwards and dig up all the assumptions and beliefs the person might not even know they hold, which can be painful, embarrassing, and hard to unlearn.
This is one of the biggest and saddest problems with using LLMs to learn. The LLM is not capable of doing what a human teacher can: "ah, the way you have formulated the question shows me that there is a mismatch between your mental model of the thing and the thing itself, so before I try to answer that question we need to back up." Since LLMs are explicitly trained to produce text that statistically resembles an answer to the question posed, and are additionally conditioned to be complimentary and congenial, they not only routinely fail at this basic part of teaching, they actively make it worse by reinforcing the unstated beliefs behind the framing of the question.
Now, when I am teaching someone something and I know they use LLMs, I have an additional step: I have to back up and ask them to show me what the LLM told them, because now they have two problems - the original mental model mismatch, and some garbage that was presented to them as a highly confident, true answer.