On-device AI changes how people behave with sensitive data. I noticed this while building a therapy prep voice agent

Something worth discussing in the context of where AI is heading.

I built a voice agent for therapy prep. It runs a conversation before your session, surfaces what’s on your mind, and generates a brief. The entire stack runs on-device using Apple Intelligence: no cloud inference, no data leaving the phone.

What I didn’t expect: people interact differently when they know inference is local. The same person who’d hesitate to type their pre-therapy thoughts into a cloud app will speak freely when they know nothing transits a server. It’s not just a privacy preference; it changes the depth of what people are willing to share with an AI agent.

Curious whether others building AI products have noticed behavioral differences based on where inference happens.

The app is called Prelude if anyone wants context: https://apps.apple.com/us/app/prelude-therapy-prep/id6761587576

submitted by /u/Emojinapp