The next dominoes in the AI bubble that I expect to fall (if you'll excuse the mixed metaphor):

  1. Insurance companies explicitly exclude coverage of any system using AI and any outputs of AI systems.
  2. Lawyers in big companies issue advice that using AI systems is too high risk.
  3. Big companies demand that IT suppliers either provide an enterprise-management switch to disable all AI functionality in their products, or offer an AI-free version.

The first is already starting: a consortium of insurance companies has asked its regulator to approve this blanket exclusion, arguing that the risks of these systems are too unpredictable to insure. They can't reason about systemic or correlated risk if you add a bullshit generator anywhere in an operational flow.

The second has happened in a few places, but is not yet widespread. Some places are hedging. When I was at MS, the AI policy was basically: 'Look, we give you all of these shiny toys! Please use them! By the way, you accept all legal liability for their output! Have fun!' One ruling that this kind of passing-the-blame-to-employees-for-correctly-using-company-provided-tools policy is unenforceable, and the lawyers will get very nervous.

The third is a consequence of the first two. If your lawyers tell you something is high risk and you can’t buy insurance, you want to make sure it isn’t used.
