# Proton Sent AI Spam After Explicit Opt-Out—Voice AI for Demos Proves Consent Beats Coercion
A user explicitly opted out of Proton's Lumo (AI) emails.
Proton sent him "Try Lumo's powerful new feature" anyway.
When he complained, Proton Support claimed it wasn't *technically* a "Lumo product update" email—it was a "Business newsletter" email that *happened* to be entirely about Lumo, from @lumo.proton.me, with "Try Lumo" in the subject line.
**The AI industry can't take "no" for an answer.**
The HN discussion (88 points, 37 comments in 1 hour) reveals this isn't isolated: GitHub auto-opted users into Copilot emails. Mozilla forced Firefox Nightly AI features. Meta trained on Instagram posts without meaningful consent.
But there's a pattern here that shows Voice AI for demos gets it right:
**Consensual AI succeeds by respecting user choice. Coercive AI fails by ignoring explicit rejection.**
Voice AI for demos works because users *choose* to interact ("Show me the dashboard"). Proton's Lumo spam fails because users explicitly said "no" and got it anyway.
Consent isn't just better ethics—it's better architecture.
## The Proton Spam: When "No" Means "Try Harder"
Here's what happened to David Bushell (the author):
**Jan 14, 2026:** Proton sends email with subject line "Introducing Projects - Try Lumo's powerful new feature now"
**The problem:** David had explicitly unchecked the "Lumo product updates" toggle in his account settings. He opted out. He gave an undeniable "no."
**Proton Support's response (paraphrased):**
1. First reply: "Here's how to opt out" (pointing to the *same toggle he already unchecked*)
2. Second reply: "Send us screenshots"
3. Third reply: "This isn't a Lumo email, it's a Business newsletter"
**Translation:** "We know you said no to Lumo emails, but we found a loophole."
The email was:
- Subject: "Try Lumo"
- From: "Lumo" <@lumo.proton.me>
- Content: Entirely about Lumo features
But Proton claims it's not a "Lumo product update" because they categorized it under "Business newsletter."
**This is the AI consent problem in microcosm:** When users say "no," AI companies redefine what "no" means.
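The loophole above is easy to see in code. Here is a minimal, hypothetical sketch (the function names, the sender address, and the topic list are all illustrative, not from any real system): a suppression check keyed on the sender's *self-declared category* lets any email through once the sender relabels it, while a check keyed on the sender domain and subject cannot be routed around.

```python
# Hypothetical sketch of the loophole. All names and the sender address
# are illustrative assumptions, not taken from Proton's actual system.

def category_based_check(email, opted_out_categories):
    # Loophole: the *sender* chooses the category, so a "Business newsletter"
    # label sails past a "Lumo product updates" opt-out.
    return email["category"] not in opted_out_categories

def content_based_check(email, opted_out_topics):
    # Stricter: suppress if the sender or subject matches an opted-out
    # topic, regardless of what category the sender declared.
    haystack = (email["from"] + " " + email["subject"]).lower()
    return not any(topic.lower() in haystack for topic in opted_out_topics)

email = {
    "category": "Business newsletter",          # sender-declared label
    "from": "updates@lumo.proton.me",           # hypothetical address
    "subject": "Introducing Projects - Try Lumo's powerful new feature now",
}

category_based_check(email, {"Lumo product updates"})  # True: email goes out
content_based_check(email, {"Lumo"})                   # False: email suppressed
```

The first check encodes "we decide what your opt-out meant"; the second encodes "your opt-out covers the topic, however we label it."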
## The Industry Pattern: AI Treats Rejection as a Bug to Route Around
Proton's spam isn't an anomaly—it's the industry standard. David Bushell's update reveals the pattern:
**GitHub/Microsoft (Jan 23, 2026):**
- David never opted into GitHub newsletters
- Microsoft auto-opted him into GitHub Copilot emails
- "Build AI agents with the new GitHub Copilot SDK"
- Unsubscribe link reveals hidden newsletter list with Copilot pre-checked
**Other examples from HN comments:**
**Mozilla Firefox:**
- Enabled AI features in Firefox Nightly without clear opt-in
- Users discovered experimental AI after updates
- Opt-out buried in about:config settings
**Meta/Instagram:**
- Trained AI models on user posts
- Consent form: "Agree or delete your account"
- UK/EU users could opt out (GDPR); US users couldn't
**AI web scrapers:**
- Effectively DDoS websites while ignoring robots.txt
- Lie about user-agents to bypass blocks
- Perplexity AI caught [lying about user-agent](https://rknight.me/blog/perplexity-ai-is-lying-about-its-user-agent/)
**The pattern:** The AI industry fundamentally does not accept "no" as a valid answer.
## Why This Fails: Coercion Creates Hostility, Consent Creates Adoption
Here's the business logic behind ignoring user consent:
**Coercive AI thinking:**
1. AI features have low organic adoption
2. If we ask for consent, users will say "no"
3. Therefore, we must not ask for consent
4. Force AI features on users → hope they learn to love it
**Actual result:**
1. Users feel violated ("I explicitly opted out")
2. Users develop AI hostility ("Every company forces this on me")
3. Users actively avoid AI features (even useful ones)
4. Brand damage ("Proton ignores user preferences")
Proton's approach assumes users who say "no" to Lumo just haven't *experienced* Lumo yet. If they could force one email through, users might convert.
But the opposite happens: The user now associates Lumo with *violating his explicit preferences*. That association is permanent.
## The Voice AI Parallel: Consent as Architecture, Not Afterthought
Voice AI for demos succeeds because consent isn't a legal requirement bolted on—it's the fundamental interaction model:
**How Voice AI respects consent:**
1. **User initiates interaction:**
- User clicks "Try Demo"
- User sees voice interface
- User *chooses* to speak or not speak
2. **Every step requires explicit action:**
- "Show me the dashboard" → Voice AI navigates
- "What does this button do?" → Voice AI explains
- User stops speaking → Voice AI stops guiding
3. **Opt-out is instant and obvious:**
- Close demo window = done
- No hidden settings
- No "Business newsletter" loopholes
- No "you opted out of *voice guidance* but not *demo assistance*" nonsense
**Contrast with coercive AI:**
**Proton Lumo:**
- User unchecks "Lumo product updates"
- Proton sends Lumo email anyway
- Proton claims it's technically a different category
**Voice AI:**
- User says nothing
- Voice AI does nothing
- No email arrives later saying "Try voice guidance!"
The architecture is fundamentally different. Voice AI *cannot* violate consent because the entire system is built on user-initiated requests.
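That claim can be made concrete. A consent-first assistant reduces to a dispatcher over explicit user commands: the only entry point is a user-issued command, so silence produces no code path at all. The sketch below is a minimal illustration under that assumption; the class and handler names are hypothetical, not an actual product API.

```python
# Minimal sketch of consent-as-architecture. Class and handler names are
# hypothetical; the point is that nothing runs without a user command.

class VoiceDemoAssistant:
    def __init__(self):
        self.handlers = {}
        self.actions_taken = []

    def on(self, command, handler):
        # Register what the assistant *can* do if asked.
        self.handlers[command] = handler

    def handle(self, command):
        # The sole entry point: called only when the user issues a command.
        # No command, no handler, no logged action.
        handler = self.handlers.get(command)
        if handler is None:
            return None
        result = handler()
        self.actions_taken.append(command)
        return result

assistant = VoiceDemoAssistant()
assistant.on("show dashboard", lambda: "navigating to dashboard")

assistant.handle("show dashboard")  # acts only because the user asked
assert assistant.actions_taken == ["show dashboard"]
```

There is no code path equivalent to "send a follow-up email later": the system has no way to act on a user who never spoke, which is the architectural version of respecting "no."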
## The Three Reasons Consensual AI Scales Better Than Coercive AI
### 1. Consent Eliminates the "Did They Really Mean No?" Problem
**Coercive AI problem:** User unchecks "Lumo updates" but Proton wonders:
- Did they understand what Lumo is?
- Would they like it if they tried it?
- Can we reframe this as a "Business update"?
- What if we send *one more* email?
**Result:** Proton sends spam, user complains, brand damage.
**Consensual AI approach:** User doesn't click "Try Voice Demo"
- Voice AI: *does nothing*
- No need to interpret intent
- No need to find loopholes
- No brand damage
**Scaling advantage:** As the user base grows, coercive AI's complaints grow linearly with it. Consensual AI's complaints hold at zero.
### 2. Consent Turns Adoption Into a Feature, Not a Metric to Game
**Coercive AI incentive structure:**
- Company wants AI adoption
- Users don't want AI
- Solution: Force AI on users, measure "engagement"
- Metric: "80% of users interacted with AI" (because they couldn't avoid it)
**Problem:** High "engagement" doesn't mean users *value* the AI. It means they couldn't escape it.
**Consensual AI incentive structure:**
- Company wants AI adoption
- Users choose whether to try AI
- Solution: Make AI valuable enough that users choose it
- Metric: "50% of users *chose* to try Voice AI" (because it was useful)
**Scaling advantage:** Coercive AI creates resentment that compounds. Consensual AI creates satisfaction that compounds through word-of-mouth.
### 3. Consent Makes Regulatory Compliance Automatic, Not an Adversarial Game
**Coercive AI vs. GDPR:**
- User opts out of Lumo emails
- Proton finds "Business newsletter" loophole
- User complains: "This is spam"
- Proton argues: "Technically it's not spam because..."
- Result: Legal gray area, user hostility
**Consensual AI vs. GDPR:**
- User doesn't interact with Voice AI demo
- Voice AI collects no data (no voice input, no session logged)
- No GDPR concerns (no data = no privacy violation)
- Result: Automatic compliance, zero legal risk
**Scaling advantage:** As privacy regulations tighten, coercive AI requires constant legal reinterpretation. Consensual AI remains compliant by default.
## The Hidden Cost of "Just One More Email": Permanent Brand Association
Proton's spam reveals a fundamental misunderstanding of how users form brand associations:
**Proton's calculation:**
- Cost of one spam email: Low (some users annoyed)
- Benefit of one spam email: Possible Lumo conversions
- Net: Worth trying
**Actual cost:**
- User's permanent association: "Proton violates my preferences"
- User's AI association: "AI companies ignore my 'no'"
- Competitive vulnerability: "If Proton doesn't respect opt-outs, maybe I should switch email providers"
David Bushell's article now lives on his blog, on Hacker News, and in Google's index. Thousands of people now associate "Proton" with "ignores user preferences to push AI."
**That association is permanent.**
Voice AI avoids this entirely. No user will ever write "Voice AI for demos sent me spam after I opted out" because the architecture makes it impossible.
## The GitHub Update: Microsoft Proves This Is Industry-Wide
David's update (Jan 23) shows this isn't a Proton-specific problem:
**GitHub/Microsoft:**
- User never opted into newsletters
- Microsoft auto-enables GitHub Copilot emails
- Opt-out buried on a hidden "Opt-Out Preferences" page (not linked from account settings)
- User discovers it only after receiving spam
**The pattern:** Companies assume "no explicit opt-in" means "consent to be opted in later."
This inverts GDPR's principle. Under GDPR, consent must be:
- Freely given (not coerced)
- Specific (not bundled)
- Informed (user knows what they're consenting to)
- Unambiguous (clear affirmative action required)
Microsoft's approach violates all four:
- Not freely given (auto-opted-in)
- Not specific (hidden in account creation)
- Not informed (user didn't know Copilot existed when they created their account)
- Not unambiguous (pre-checked box, not affirmative action)
**Voice AI contrast:**
- User must click "Try Demo" (freely given)
- User must speak commands (specific to each action)
- UI clearly shows voice interface (informed)
- Each command is explicit user action (unambiguous)
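The four GDPR criteria above compose cleanly: consent is valid only when *all* of them hold. A hedged sketch, with field names that are illustrative rather than taken from any real compliance system:

```python
# Illustrative sketch: a consent record valid only when all four GDPR
# criteria hold. Field names are assumptions, not a real system's schema.
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    freely_given: bool  # no "agree or delete your account" coercion
    specific: bool      # covers exactly one purpose, not a bundle
    informed: bool      # user was told what they were consenting to
    unambiguous: bool   # an affirmative action, not a pre-checked box

    def is_valid(self) -> bool:
        return all([self.freely_given, self.specific,
                    self.informed, self.unambiguous])

# A pre-checked, hidden newsletter subscription fails every criterion:
prechecked = ConsentRecord(freely_given=False, specific=False,
                           informed=False, unambiguous=False)

# A user clicking "Try Demo" on a clearly labeled voice interface
# satisfies all four:
demo_click = ConsentRecord(freely_given=True, specific=True,
                           informed=True, unambiguous=True)

prechecked.is_valid()  # False
demo_click.is_valid()  # True
```

The conjunction matters: a pre-checked box that is technically disclosed somewhere still fails `unambiguous`, so no amount of category relabeling makes it valid.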
## The Verdict: Consent as Architecture, Not Compliance Theater
Proton's spam proves a simple thesis:
**When AI is built on coercion, every "no" becomes a problem to solve.**
Proton user says "no to Lumo emails" → Proton redefines what "Lumo emails" means.
GitHub user doesn't opt-in to newsletters → GitHub opts them in anyway.
Mozilla user doesn't enable AI → Mozilla enables it automatically.
**When AI is built on consent, every "no" is respected automatically.**
Voice AI user doesn't speak → Voice AI does nothing.
Voice AI user closes demo → Voice AI stops collecting data.
Voice AI user doesn't click "Try Demo" → No spam email arrives later.
The difference isn't just ethics—it's architecture. Consensual AI can't violate user preferences because the system only activates on explicit user action.
---
**Key Takeaways:**
1. Proton sent AI spam to user who explicitly opted out of Lumo emails
2. Proton Support claimed it was technically a "Business newsletter," not a "Lumo update"
3. GitHub/Microsoft auto-opted users into Copilot emails without consent
4. Pattern: AI industry treats "no" as a bug to route around, not a preference to respect
5. Voice AI for demos succeeds because consent is architectural (user-initiated commands)
6. Coercive AI creates resentment that compounds; consensual AI creates satisfaction that compounds
7. Consent makes GDPR compliance automatic instead of adversarial
8. Violating user preferences creates permanent negative brand associations
**Meta Description:**
Proton sent AI spam after user explicitly opted out, claiming it was technically a "Business newsletter." GitHub auto-opted users into Copilot emails. Voice AI for demos proves consensual AI (user-initiated commands) beats coercive AI (ignoring explicit rejection). Learn why consent as architecture succeeds.
**Keywords:** Proton Lumo spam, AI consent problem, GDPR email marketing, GitHub Copilot opt-in, Mozilla Firefox AI, Voice AI consent architecture, user preference violation, forced AI features, consensual AI design, explicit opt-out ignored