The matter of trusting programs whose code is wholly or partly generated by #AI is, I think, one of those things that will only be solved when the companies shipping it face steep legal repercussions for the bugs, security flaws, and objectionable or illegal material it produces.
We can't audit all code, and human-written code has bugs too. But responsibility for human code can't be laundered away as easily as it can for AI code right now.
In the end, value is trust, and trust is accountability.