
Designing trust for a system that never stops listening.

Problem

Users said no before seeing what the product did.

Insight

Sequencing drove trust more than explanation.

Fix

Moved permission to after the first product experience.

Impact

86%

always-on opt-in · threshold was 50%

40 users
2 weeks

Role

Lead Product Designer · 0 to 1

Scope

End-to-end · research to shipped v1

With

Mobile eng · hardware eng · AI · product

CAIPO wearable device
Onboarding screen showing AI summary before permissions are requested
Dashboard with persistent Listening status chip
Voice capture screen
Control center with one-tap pause
The Problem

Most apps function without full permissions.

CAIPO doesn't.

For CAIPO to work, users have to turn on always-on listening. Without it the product does nothing at all. People understood what it did. They just didn't trust it enough to say yes.

The Insight

The concern wasn't confusion. It was discomfort.

Early conversations · 8 participants · exploratory

"Is it recording me all the time?"

"When is it actually listening?"

"Can I turn it off easily?"

"How do I know what it captured?"

Every question was really the same question: can I see what it's doing, and can I make it stop?

People don't say no to always-on because they don't get it. They say no because it doesn't feel like something they can control.

The Shift

Three approaches before the right one.

01
Failed

Granular Controls


CAPTURE MODE

Meetings only

Calls only

Custom hours

Always on

More options without context created paralysis, not clarity.

02
Failed

Better Privacy Explanation


How CAIPO protects you

Always listening

On-device only

Private by default

Nothing leaves your phone

Delete anytime

Full control in settings

"CAIPO" Would Like to Access the Microphone

Allow CAIPO to always listen in the background.

Don't Allow
Allow

People understood the product better. They still didn't enable capture. Understanding and trust are different things.

03
Failed

Reactive Indicators


CAIPO

9:03 · IDLE

9:04 · CAPTURING

Voice activity detected

9:04 · IDLE AGAIN

Random flashes of activity felt more unsettling than silence. Users wanted a constant state, not a surprise.

We kept trying to fix the moment of permission. The actual problem was everything that came before it.

The Design

Three decisions. Each one earned.

Addressing · Is it worth trusting?

Value-First Sequencing

Show what the product produces before asking for permission. The consent screen moves to after the preview.

Before

Install → Consent → Product

After

Install → Preview → Consent

Tradeoff

One extra step before consent. Tracked Day 3 retention to prove the friction was worth it.
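The sequencing change, as a sketch. The step names, flow arrays, and nextStep helper here are illustrative, not CAIPO's production code:

```ts
// Illustrative only — names are assumptions, not CAIPO's actual code.
type Step = "install" | "preview" | "consent" | "product";

// Before: the OS permission prompt sat between install and any value.
const flowBefore: Step[] = ["install", "consent", "product"];

// After: the preview moves ahead of consent, so the prompt fires
// only once the user has already seen real output.
const flowAfter: Step[] = ["install", "preview", "consent", "product"];

function nextStep(flow: Step[], current: Step): Step | undefined {
  return flow[flow.indexOf(current) + 1];
}

// nextStep(flowAfter, "install") → "preview"   (value first)
// nextStep(flowAfter, "preview") → "consent"   (then the ask)
```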

AI output shown first, before permission is requested

Consent screen appears after the user has seen value

Addressing · Is it on right now?

Persistent Capture State

A status chip at the top of every screen. Always visible. Answers the question users kept asking: is it on right now?

Off
Active
Paused
Muted

Tradeoff

Permanent top-bar real estate. A chip that only appears while capturing would create a new ambiguity.

App screen with green Listening chip

App screen with orange Paused chip

App screen with red Muted chip
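Stated as code, the rule is that the chip is an exhaustive mapping from capture state to a visible treatment, with no hidden branch. A minimal TypeScript sketch; CaptureState, ChipStyle, and chipFor are illustrative names, not CAIPO's actual implementation:

```ts
// Illustrative sketch — not CAIPO's production code.
type CaptureState = "off" | "listening" | "paused" | "muted";

interface ChipStyle {
  label: string;
  color: "gray" | "green" | "orange" | "red";
}

// Exhaustive mapping: every state renders a visible chip.
// There is deliberately no "hidden" branch, so the type system
// enforces the design rule that the chip never disappears.
function chipFor(state: CaptureState): ChipStyle {
  switch (state) {
    case "off":
      return { label: "Off", color: "gray" };
    case "listening":
      return { label: "Listening", color: "green" };
    case "paused":
      return { label: "Paused", color: "orange" };
    case "muted":
      return { label: "Muted", color: "red" };
  }
}
```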

Addressing · Can I stop it?

Device Control Center

If exit feels hard, users assume they can't leave. That breaks the whole model.

Tapping the status chip opens a control center where the user can pause in one tap.

Tap chip · control center opens · pause in one tap

The Result · 40 Internal Users · 2 Weeks

Nobody finished the pilot unsure whether their device was on.

That was the one thing I really needed to get right.

Always-On Adoption

86%

Viability threshold was 50%. The number includes the participants who pushed back hardest on privacy.

Still active Day 3


People stayed. The product kept earning their trust after setup.

Used Pause Control


People actually used pause, which means it felt safe to tap, not risky.

Users who paused came back.

"I kept waiting to feel uncomfortable about it. I never did. I think it was because I could always see what it was doing."

Pilot participant · Enabled on Day 1

These were people who already knew the product. The real test is with someone who starts out skeptical.

Conflicts

Three moments where the right call wasn't obvious.

01 · Hardware Engineering

UI state and hardware state were out of sync by two seconds.

Decision

Wait for hardware handshake before updating the UI

A privacy product cannot show one state while operating in another.

App UI shows · Paused
Hardware is · Still capturing

Two seconds where both are true at once.
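The shipped rule, sketched in TypeScript. The Device interface, its send method, and the setChip callback are hypothetical stand-ins for the real stack:

```ts
// Hypothetical sketch of the handshake rule — not CAIPO's real code.
type CaptureState = "off" | "listening" | "paused" | "muted";

interface Device {
  // Resolves only after the hardware confirms the new state.
  send(command: "pause" | "resume"): Promise<void>;
}

async function pauseCapture(
  device: Device,
  setChip: (state: CaptureState) => void
): Promise<void> {
  // Deliberately no optimistic update here: flipping the chip to
  // "Paused" before the round-trip would show one state while the
  // hardware operates in another for ~2 seconds.
  await device.send("pause");
  // The chip changes only after the hardware acknowledges.
  setChip("paused");
}
```

The cost is up to two seconds of latency before the chip changes; the alternative was a chip that could read Paused while the device was still capturing.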

02 · Mobile Engineering

Making pause feel safe to use, not risky to tap.

Decision

One tap to pause, no confirmation

If pause feels hard to use, users skip it. That leaves only fully on or uninstall.

Rejected · 2 taps, confirm dialog
Shipped · 1 tap, instant

03 · AI + Product

Scoping the onboarding preview to what actually shipped.

Decision

Show only what ships on day one

A user who opts in based on a promise the product cannot keep will not stay.

Proposed in onboarding · Not shipping

Full context summary
AI-detected decisions
Predictive insights
Cross-meeting patterns
Auto follow-up drafts

Ships on day one · Delivered

Meeting summary
Action items
Open questions
What's still open

Open Questions

Three questions before I'd scale this.

External Trust

Does this hold for users who arrive skeptical?

Everyone in this pilot already knew the product. The sequencing fix likely has less leverage with someone who distrusts the category entirely.

Indicator Fatigue

Does the chip become invisible after a few weeks?

Habituation is real. A chip that users stop registering is no longer doing its job. You'd need 90 days of data to know.

AI Transparency

Users see when capture happens. Not what the AI keeps.

Visibility into device state is one layer. Visibility into model behavior is the harder, more important next one.

What I'd build next

Making AI behavior readable.

The chip tells you the device is on. It doesn't tell you what the AI keeps, what it skips, or why. That's the next trust layer to design for.

Takeaway

In systems that require ongoing trust, sequencing determines adoption more than explanation.

I went in thinking this was a privacy problem. It was a sequencing problem. The fix wasn't a better explanation of what we were doing. It was changing what users experienced first.