There will never be a truly private AI tool unless it was trained without nonconsensual data.

Even if a platform were able to create perfect protections for its users' prompts and results, if it is built on or uses an AI model that was trained on, or is updated and optimized with, data scraped from millions of people without their consent, then of course that platform isn't "privacy-respectful."

How could it be?

The company is saying:
"We respect the privacy of our users while they are using our platform, but outside of it, it's fair game."

Users who think they are using a privacy-respectful platform are in fact saying:

"Privacy for me and not for thee,"

and are directly contributing to the platform scraping even more nonconsensual data to improve.

Always ask: where does the training data come from?

Without assurance that a platform uses only AI models trained on ethically acquired data, it is not a privacy-respectful platform.

