#1 - The future looks like more and more code generated by LLMs, and fewer and fewer programmers who can fix it. The "hallucination" problem is never going away, and there's no reason to believe the models will ever be capable of fixing all the bugs they generate.
#2 - Meanwhile, pricing for the most capable models is massively subsidised. At some point, prices will *have* to go up, and by at least an order of magnitude.
The race for AI companies is to make #1 an entrenched, chronic problem before #2 happens, so that by the time prices rise, there's nobody left who can do the work without them.