In recent weeks there have been a number of examples of Erdos problems that were solved more or less autonomously by an AI tool, only for it to emerge that the problem had already been solved years ago in the literature: https://www.erdosproblems.com/897 https://www.erdosproblems.com/333 https://www.erdosproblems.com/481 .
One possible explanation for this is contamination: that the solutions to each of these problems were somehow present in the training data of the AI tools and encoded within their weights. However, other AI deep research tools failed to locate these connections, so I am skeptical that this is the full explanation for the above events.
My theory is that the AI tools are now becoming capable enough to pick off the lowest-hanging fruit amongst the problems listed as open in the Erdos problem database, where by "lowest-hanging" I mean "amenable to simple proofs using fairly standard techniques". However, that category is also precisely the category of nominally open problems that are most likely to have already been solved in the literature, perhaps without much fanfare due to the simple nature of the arguments. This alone may explain much of the strong correlation noted above between AI-solvability and having already been proven in some obscure portion of the literature.
This correlation is likely to persist in the near term, particularly for problems attacked purely by AI tools without significant expert supervision. Nevertheless, the progress in the capability of these tools is non-trivial, and bodes well for their ability to automatically scan through the "long tail" of underexamined problems in the mathematical literature.