The question is not whether you can create software using LLMs - you can (most software is just boring CRUD shit).
But you do pay a hefty price: in lower quality (security issues, reduced maintainability), in skill decay among the people "guiding" the stochastic parrots, etc.

It's not "can AIs create software" but "are we willing to accept worse software running more and more of our lives?"

RE: tldr.nettime.org/@tante/116170

And if disruption theory has shown us anything, it's that "worse" (PC vs. mainframe, phone vs. laptop) can definitely win out if it brings a substantial upside.

Aside from putting LLMs into decision-making processes, this has long been my biggest fear: that we accept that everything must now be half broken in the name of progress.
