A state-of-the-art AI agent like GPT-5.4 can still make mistakes about basic concepts in Lean's type theory. I don't recommend using AI agents as one's first source of knowledge. See https://leanprover.zulipchat.com/#narrow/channel/219941-Machine-Learning-for-Theorem-Proving/topic/gpt-5.2E4's.20partially.20incorrect.20explanation/near/580330953.
@chabulhwi Bulhwi Cha You can only make a mistake if you are capable of recognizing it as a mistake, or at least if others consider that you should be capable of doing so. Otherwise, it's just an unsupported claim.
LLM engines cannot distinguish the true from the false, hence they only make unsupported claims.
It is the user who, facing those claims, can transform them into statements that have truth content, much as people recognize animals when looking at clouds.
If you have a fediverse account, you can quote this note from your own instance. Search https://mathstodon.xyz/users/antoinechambertloir/statuses/116256556228215532 on your instance and quote it. (Note that quoting is not supported in Mastodon.)