A Pathway to Privacy from AI—while using it?

My Claude Code project taught me how

I was explaining Howler to a friend worried about privacy, and realized something I hadn’t fully thought through while building it.

Howler’s a voice messaging app with timestamped replies: you can respond to a specific instant of someone’s voice note (which is encrypted using the Signal Protocol [1]). Those threaded replies can then be stitched into publishable audio that sounds like a conversation, so you get quick podcasts without the production overhead. I rebuilt it in 9 days with Claude Code after first manually solo-hacking it during COVID.
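For the curious, the reply structure I have in mind is roughly this. The names are mine, not Howler’s actual schema; it’s just a sketch of what "reply to a specific instant" means as data.

```typescript
// Illustrative only: these names are mine, not Howler's real data model.
// A reply targets a specific instant inside the parent voice note.
interface VoiceReply {
  parentNoteId: string;        // the voice note being replied to
  offsetMs: number;            // the instant in the parent audio this reply addresses
  audioCiphertext: Uint8Array; // the reply's audio, encrypted end to end before upload
  sentAt: string;              // ISO timestamp, visible to the server only as metadata
}
```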

Right now, when I use Claude directly, I give Anthropic access to everything: my identity, my conversation history, my patterns of usage (the same goes if you use OpenAI’s ChatGPT). I’m not too worried about it today, but I’m aware of the tradeoff.

As someone lacking a formal background in cryptography and privacy, I stumbled onto a different style of engaging with Claude and GPT while building an export feature in Howler—letting users share transcripts to Signal, WhatsApp, or other messengers. I was worried that in building this convenience pipeline, I would break the end-to-end encryption model and expose user identifying data. But looking at it more, I realized: when the transcript hits OpenAI for cleanup, there’s no user identity attached. Just text.
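Concretely, the cleanup request looks roughly like this. A sketch only, not Howler’s actual code: the function name, the prompt, and the model choice are placeholders I’m using to illustrate the shape of the call.

```typescript
// Hypothetical sketch: send only the transcript text for cleanup.
// No user id, email, or device info travels with the request.
async function cleanUpTranscript(transcript: string, apiKey: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // BYOK: the user's own key, not a Howler account
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // placeholder model name for illustration
      messages: [
        { role: "system", content: "Clean up this voice-note transcript. Fix punctuation; keep the wording." },
        { role: "user", content: transcript }, // just text, nothing identifying (unless the words themselves are)
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

The point is what’s absent from the request: no account id, no email, no contact graph. Just the words and a key.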

When you use cloud transcription or AI cleanup in Howler, the AI services see your content but not your identity. OpenAI gets anonymous audio. Anthropic gets anonymous text. Neither knows who you are, who you’re talking to, or anything about you beyond the words themselves.

| Service | What they see | What they don’t see |
| --- | --- | --- |
| Howler | Your identity, metadata | Message content (E2E encrypted) |
| OpenAI/Anthropic | Content for processing | Your identity, your contacts |
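Put in code terms, the split in that table looks something like this. These request shapes are invented for illustration, not Howler’s real API.

```typescript
// What Howler's server receives: who and when, but ciphertext it cannot read.
type HowlerUpload = {
  senderId: string;
  recipientId: string;
  sentAt: string;           // ISO timestamp (metadata)
  ciphertext: Uint8Array;   // Signal-Protocol-encrypted audio
};

// What the AI provider receives: readable content, but no account or contact info.
type AiCleanupRequest = {
  model: string;
  messages: { role: "system" | "user"; content: string }[]; // plaintext transcript only
};
```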

Compare that to using Claude or ChatGPT directly, where your account, email, and full history are tied together.

This isn’t a novel idea. It’s the same pattern as VPNs, Apple’s Private Relay, anonymous remailers from the 90s. An intermediary strips identity from the request. Old concept, new context.

What’s mildly interesting is that I didn’t set out to build this. It fell out of other decisions: end-to-end encryption, the BYOK option (Bring Your Own Key, meaning you get your own API key from Anthropic or OpenAI), keeping the AI calls stateless. The privacy benefit is a byproduct of not collecting what I don’t need.
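Here’s what I mean by the BYOK and stateless pieces, sketched with made-up names. A real build would keep the key in the platform keychain rather than plain local storage, but the property that matters is that no conversation history or account linkage gets stored anywhere.

```typescript
// Sketch of the BYOK + stateless idea (names are hypothetical).
// The key stays on the device, each call stands alone, nothing is persisted server-side.
async function statelessCleanup(transcript: string): Promise<string> {
  // User-supplied key stored locally; Howler's servers never see it.
  const apiKey = localStorage.getItem("byok_openai_key");
  if (!apiKey) throw new Error("No API key configured");
  // Reuses the cleanup call sketched earlier: only the transcript text goes out.
  return cleanUpTranscript(transcript, apiKey);
}
```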

That’s usually how good privacy works, in my limited understanding of it. Not a feature you bolt on. Just not grabbing data you were never going to use anyway.

There’s a caveat: content can reveal identity. If your transcript says “Hi, I’m Angadh and here’s my address,” the anonymity is gone. And the separation isn’t cryptographic. Howler knows which user made which API call and when. OpenAI/Anthropic know the content and when. If both parties compared logs, they could correlate requests to users. The protection is policy: I don’t share user identity with them, I don’t log the content. That’s a trust decision, not a mathematical guarantee. But for thinking out loud, drafting ideas, cleaning up voice notes—it’s genuinely more private than talking to the AI directly.
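To make the "policy, not math" point concrete: if both sides did keep full logs, re-linking users to content would be little more than a join on timestamps. A toy illustration with invented log shapes:

```typescript
// Toy illustration of the correlation risk; these log shapes are made up.
type HowlerLog = { userId: string; requestAt: number };      // who called, when (no content)
type ProviderLog = { content: string; receivedAt: number };  // what was said, when (no identity)

function correlate(howler: HowlerLog[], provider: ProviderLog[], toleranceMs = 2000) {
  // Pair each provider entry with any Howler entries seen within a small time window.
  return provider.map((p) => ({
    content: p.content,
    candidateUsers: howler
      .filter((h) => Math.abs(h.requestAt - p.receivedAt) < toleranceMs)
      .map((h) => h.userId),
  }));
}
```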

I think I have accidentally stumbled onto a different model while building Howler. I am not saying the idea is novel, but as an implementation it feels that way to me (a reminder that I built this with Claude Code, so there could be vulnerabilities; but I’m not running Signal, so the stakes are lower for me). A big part of writing this post is to be corrected where I am wrong in my assumptions. But if I am right, then a more secure messaging app like Signal should consider bringing LLMs into their tooling via Anthropic’s and OpenAI’s APIs; it might help them attract even more users.

  [1] With the caveat that I have only audited some of the messages myself: they definitely appear encrypted in the database, and one can’t see the same message on unlinked devices, but this needs external auditing.


