As hyped as Claude Code + Obsidian is, I have no desire to upload my personal unencrypted vault data to Anthropic or anyone else.

I hope that the ideas of private inference and confidential computing that Moxie described take off. It's how all LLMs should work.

Confidential computing
This is the domain of confidential computing, which uses hardware-enforced isolation to run code in a Trusted Execution Environment (TEE). The host machine provides CPU, memory, and power, but cannot access the TEE's memory or execution state.

LLMs are fundamentally stateless—input in, output out—which makes them ideal for this environment. For Confer, we run inference inside a confidential VM. Your prompts are encrypted from your device directly into the TEE using Noise Pipes, processed there, and responses are encrypted back. The host never sees plaintext.
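The shape of that channel can be sketched in a few lines. This is a toy illustration, not Noise Pipes itself: real Noise Pipes use X25519 and ChaCha20-Poly1305 inside authenticated handshake patterns, where this stand-in uses classic Diffie-Hellman over a small prime and a hash-based keystream. The point it shows is the same one made above: the key is agreed end to end, so the host only ever relays public values and ciphertext.

```python
import hashlib, secrets

# Toy sketch of an end-to-end encrypted channel into a TEE. NOT the real
# Noise Pipes construction; parameters here are illustrative only.
P = 2**64 - 59   # toy prime; real protocols use proper elliptic-curve groups
G = 2

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(my_priv, peer_pub):
    # Both ends compute g^(ab) mod P and hash it into a symmetric key.
    secret = pow(peer_pub, my_priv, P)
    return hashlib.sha256(secret.to_bytes(8, "big")).digest()

def xor_cipher(key, nonce, data):
    # Hash-based keystream: a stand-in for a real AEAD like ChaCha20-Poly1305.
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

client_priv, client_pub = keypair()
tee_priv, tee_pub = keypair()

# Both ends derive the same key; the host only ever sees the public halves.
k = shared_key(client_priv, tee_pub)
assert k == shared_key(tee_priv, client_pub)

nonce = secrets.token_bytes(12)
ct = xor_cipher(k, nonce, b"my private prompt")            # what the host relays
assert xor_cipher(k, nonce, ct) == b"my private prompt"    # TEE decrypts inside the VM
```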

But this raises an obvious concern: even with encrypted pipes in and out of an encrypted environment, it matters enormously what is actually running inside it. The client needs assurance that the code is doing what it claims.
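The check this implies can be sketched as follows. Everything here is hypothetical and illustrative, not a real SEV-SNP or TDX API: the TEE produces a signed "measurement" (a hash) of the code it booted, and the client refuses to open a channel unless that measurement matches a build it trusts.

```python
import hashlib

# Hypothetical attestation check (names and build strings are illustrative,
# not a real SEV-SNP/TDX API). The TEE reports a hash of the code it loaded;
# the client compares it against measurements of builds it trusts.
TRUSTED = {hashlib.sha384(b"inference-server build 2024.1").hexdigest()}

def accept_channel(reported_measurement: str) -> bool:
    # A real verifier also checks the CPU vendor's signature over the
    # attestation report before trusting the measurement; omitted here.
    return reported_measurement in TRUSTED

good = hashlib.sha384(b"inference-server build 2024.1").hexdigest()
evil = hashlib.sha384(b"tampered build").hexdigest()
assert accept_channel(good)
assert not accept_channel(evil)
```

In practice this verification happens before any prompt is sent, so a swapped-out or modified server never receives plaintext in the first place.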

If you have a fediverse account, you can quote this note from your own instance: search for https://mastodon.social/users/kepano/statuses/115888951400371463 on your instance and quote it. (Note that Mastodon itself does not support quote posts.)