Honest question: do any of you (who are trained in software engineering) get reasonable code out of LLMs? If yes, how?
Sure, the tech bros tell us it's amazing. But lately, people whose judgement I trust more and who work in the industry also say they've been productively using LLM-generated code. So I thought maybe I should test[1] it again to see where the tech we keep criticizing stands, and ... the results were pretty bad? The code had outright errors, and when I fixed those, it still didn't do what it was supposed to.
It can't be that stupid, I must be prompting it wrong!
So, any pointers on what I'm doing wrong? Or is the level of delusion really this crazy, or did I just have bad luck?
[1] I pasted a not particularly long MicroPython script into the chat interface of the free versions of both ChatGPT and Claude and asked each to replace the bit-banging part with PIO code.