Reading about the Collingridge dilemma (en.wikipedia.org/wiki/Collingridge_dilemma). Essentially, by the time we understand a technology well enough, it may be so "locked in" to society that it is very difficult to change anything about it, largely due to vested interests and incentives. Before a technology goes to market we may have some knowledge of its likely impacts, but not enough proof to slow or stop it.

A lot of startups exploit this gap - think Uber and Airbnb - and we are seeing the same attempts with AI.

So much of my focus in the tech industry is on this exact thing: how do we anticipate negative consequences and course correct before they are embedded and contribute to social harm?

Many problems that new tech products introduce are easily predictable: cops using surveillance software to stalk their exes or someone who was exonerated, AI teaching kids how to make meth, etc.

Companies should anticipate these issues and do something about them - and be held accountable when they fail.
