When I was on the Windows accessibility team at Microsoft, I don't recall us spending any time trying to make UI Automation easier for third-party developers to implement correctly. I didn't become aware of the pitfalls myself until two years into my time there, when I was fortuitously introduced to a third-party developer who wanted our help with their UIA implementation during that year's company Hackathon.
It's really tempting to retreat from this reality and dream of a future where AI magically solves everything by enabling a screen reader in the literal sense, i.e. something that interprets the pixels on the screen. I've done this multiple times over the past couple of years, often with the help of an LLM: writing escapist sci-fi stories where this happens one way or another. I'm trying to stop doing that, because I recognize the problems with both generative AI and escapism.
If you have a fediverse account, you can quote this note from your own instance. Search for https://toot.cafe/users/matt/statuses/114173283944111183 on your instance and quote it. (Note that quoting is not supported in Mastodon.)