"Open Source Is Not About You" Applies to Voice AI Demos Too

# "Open Source Is Not About You" Applies to Voice AI Demos Too **Meta Description**: Rich Hickey's famous open source essay reveals what Voice AI demos actually owe users - and what they don't. Realistic expectations prevent user disappointment. --- ## The Essay That Keeps Coming Back In 2018, Rich Hickey (creator of Clojure) published a GitHub Gist titled "Open Source is Not About You." It's resurfaced today on HackerNews with 1,248 stars and fresh discussion. The core message is blunt: > As a user of something open source you are not thereby entitled to anything at all. You are not entitled to contribute. You are not entitled to features. You are not entitled to the attention of others. You are not entitled to having value attached to your complaints. You are not entitled to this explanation. Hickey wrote this because Clojure users were demanding features, complaining about the development process, and expecting the core team to respond to every issue. His response: **Open source is a gift. Gifts don't come with obligations.** Six years later, this essay is still generating passionate responses because **it challenges a fundamental assumption about what creators owe users.** And Voice AI demos are about to face the exact same conflict. --- ## What Open Source Actually Means Hickey's key distinction: > Open source is a licensing and delivery mechanism, period. It means you get the source for software and the right to use and modify it. All social impositions associated with it, including the idea of 'community-driven-development' are part of a recently-invented mythology with little basis in how things actually work. **Open source = license + source code.** **NOT**: Community management. Feature requests. Issue triage. Pull request reviews. Documentation. Support. Roadmap transparency. But users expect all of that. Why? Because somewhere along the way, "open source" became synonymous with "community-driven development." GitHub issues became feature request forums. 
Pull requests became implicit obligations to review. Maintainer silence became "ignoring the community."

Hickey's point: **None of this is actually part of open source. It's mythology.**

---

## The Entitlement Paradox

Here's the paradox:

**Scenario 1: Proprietary Software**

- User pays $100/month for tool
- Tool has bugs
- User: "I'm paying for this, you need to fix it"
- **Everyone agrees this is reasonable**

**Scenario 2: Open Source Software**

- User pays $0 for tool
- Tool has bugs
- User: "This needs to be fixed"
- Creator: "You're not paying me, I don't owe you anything"
- User: **"But open source is supposed to be community-driven!"**

The entitlement paradox: **Users expect more from free tools than paid tools.**

Why? Because "open source" implies a social contract that doesn't actually exist in the license.

Hickey's essay is saying: **Read the actual license. Everything else is your imagination.**

---

## What Hickey Actually Owes Clojure Users

According to the license? **Nothing beyond the source code and right to use it.**

What does he actually provide? From the essay:

> Alex Miller is extremely attentive to and engaged with the Clojure community. He and Stu Halloway and I regularly meet and discuss community issues. Alex, at my direction, spends the majority of his time either working on features for the community or assessing patches and bug reports. I spend significant portions of my time designing these features - spec, tools.deps, error handling and more to come.

**This is time taken away from earning a living.**

Hickey and Cognitect:

- Hire full-time developers to work on Clojure
- Use retirement savings to fund development
- Get no royalties from Clojure
- Serve fewer than 1% of Clojure users as paying customers

**What they provide is far beyond what the license requires.**

But users still complain it's not enough. Why? Because **expectations are not set by licenses.
They're set by mythology.**

---

## Voice AI Demos Have the Same Problem

Now let's talk about Voice AI demos.

When you load a demo agent on a website, what are you entitled to?

**According to the terms of service**: Access to the demo within specified limits.

**What users actually expect**:

1. Agent understands all questions perfectly
2. Agent knows everything about the product
3. Agent can perform any task user requests
4. Agent never makes mistakes
5. Agent has unlimited patience
6. Agent improves based on user feedback
7. Agent is available 24/7 with no downtime

**The entitlement paradox again**: Users expect more from free demos than paid products.

And when demos don't meet these expectations, users blame the demo.

But here's the question: **What does a free demo actually owe you?**

---

## What Voice AI Demos Actually Owe Users (By License/ToS)

Let's be specific. Most demo agents have terms that say something like:

> "This demo is provided 'as-is' for evaluation purposes. We make no guarantees about accuracy, availability, or fitness for any purpose."

**That's it.**

**NOT included**:

- Guarantee of accuracy
- Guarantee of uptime
- Guarantee agent knows your specific use case
- Guarantee agent handles edge cases
- Guarantee agent improves based on feedback
- Guarantee human support if agent fails
- Guarantee demo stays available indefinitely

**What demos typically DO provide** (beyond legal requirements):

- Agent trained on product documentation
- Natural language interface
- Some level of task completion
- Error handling (hopefully)
- Feedback mechanisms (sometimes)

But users expect far more. Why?
**Because "AI demo" implies capabilities that aren't actually promised.**

---

## The Eight Realistic Expectations for Voice AI Demos

If Rich Hickey wrote "Voice AI Demos Are Not About You," here's what it would say:

### Expectation #1: Demos Owe You Access, Not Perfection

**What you're entitled to**: The demo loads and responds to input

**What you're NOT entitled to**: Perfect understanding of ambiguous requests

**Example**:

```
USER: "Show me the thing"
AGENT: "I don't have enough context. Which feature are you interested in?"
USER: "You should know what I mean!"
```

**Reality**: The demo gave you access. It's not obligated to read your mind.

---

### Expectation #2: Demos Owe You Their Training Data, Not All Knowledge

**What you're entitled to**: Responses based on what the demo was trained on

**What you're NOT entitled to**: Knowledge beyond the training data

**Example**:

```
USER: "What's the pricing for enterprise plans?"
AGENT: "I don't have information about enterprise pricing. Would you like to contact sales?"
USER: "A good demo would know this!"
```

**Reality**: If enterprise pricing isn't in the training data, the demo can't know it. That's not a failure - that's a boundary.

---

### Expectation #3: Demos Owe You Task Completion Attempts, Not Guarantees

**What you're entitled to**: The demo tries to complete tasks it's designed for

**What you're NOT entitled to**: Guaranteed success on every task

**Example**:

```
USER: "Export all my data from the last 3 years in CSV format with custom headers"
AGENT: "I can't perform complex exports. You'll need to use the dashboard for that."
USER: "Why even have a demo if it can't do this?"
```

**Reality**: Demos have capability boundaries. Hitting those boundaries isn't a broken promise - it's reality.
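The pattern behind Expectations #1-#3 can be sketched as a simple scope check: clarify ambiguous input, state knowledge boundaries, and attempt (without guaranteeing) in-scope tasks. This is a minimal illustrative sketch; `DemoScope`, `respondToRequest`, and the keyword heuristics are hypothetical names, not part of any real demo API:

```typescript
// Hypothetical sketch: route a request to an attempt or a boundary message.
interface DemoScope {
  supportedTasks: string[]; // tasks the demo was designed for
  knownTopics: string[];    // topics covered by the training data
}

function respondToRequest(request: string, scope: DemoScope): string {
  const text = request.toLowerCase();

  // Expectation #1: ambiguous input gets a clarifying question, not a guess
  if (text.split(/\s+/).length < 3) {
    return "I don't have enough context. Which feature are you interested in?";
  }

  // Expectation #2: outside the training data, state the boundary
  const onTopic = scope.knownTopics.some(topic => text.includes(topic));
  if (!onTopic) {
    return "I don't have information about that. Would you like to contact sales?";
  }

  // Expectation #3: in scope means an attempt, not a guarantee
  const task = scope.supportedTasks.find(t => text.includes(t));
  return task
    ? `Attempting: ${task}. I'll tell you if I hit a limitation.`
    : "That's outside my design scope. You'll need to use the dashboard for that.";
}
```

The order matters: clarification comes before the knowledge check, so the demo never claims a knowledge gap when the real problem is an under-specified request.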
---

### Expectation #4: Demos Owe You Error Messages, Not Apologies

**What you're entitled to**: Clear error messages when something fails

**What you're NOT entitled to**: Emotional validation of your frustration

**Example**:

```
AGENT: "I couldn't complete that action because I don't have permission to modify user accounts."
USER: "This is so frustrating! Why can't you do this?"
AGENT: "I understand this is frustrating. I don't have permission to modify user accounts."
USER: "Stop saying you understand! Just fix it!"
```

**Reality**: The demo told you what it can't do and why. It doesn't owe you therapy about the limitation.

---

### Expectation #5: Demos Owe You Functionality, Not Feature Requests

**What you're entitled to**: The features the demo currently has

**What you're NOT entitled to**: New features based on your feedback

**Example**:

```
USER: "Can you add dark mode support?"
AGENT: "I don't have the ability to add features. You can submit feedback to the product team."
USER: "But I'm telling YOU. Why can't YOU add it?"
```

**Reality**: Demos execute existing code. They don't build new features. Expecting a demo to implement your idea is like expecting a car to modify its own engine.

---

### Expectation #6: Demos Owe You Their Design Purpose, Not Your Use Case

**What you're entitled to**: The use cases the demo was designed for

**What you're NOT entitled to**: Support for use cases beyond the design scope

**Example**:

```
USER: "I want to use this e-commerce demo to manage my inventory for a physical retail store"
AGENT: "This demo is designed for online stores. Physical retail inventory management isn't supported."
USER: "But inventory is inventory! Why can't you handle both?"
```

**Reality**: Tools are built for specific purposes. Expecting a demo to support every adjacent use case is unrealistic.
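Expectations #4-#6 share one mechanic: the agent states the boundary and routes the user somewhere actionable, with no apology and no architectural justification. A minimal sketch under that assumption (`BoundaryKind`, `boundaryResponse`, and the messages are hypothetical, not a real API):

```typescript
// Hypothetical sketch: state a boundary and route the user, nothing more.
type BoundaryKind = "permission" | "feature_request" | "out_of_scope";

function boundaryResponse(kind: BoundaryKind, detail: string): string {
  switch (kind) {
    case "permission":
      // Expectation #4: a clear error message, no emotional validation
      return `I couldn't complete that: I don't have permission to ${detail}.`;
    case "feature_request":
      // Expectation #5: demos execute existing code, they don't build features
      return `I can't add ${detail}. You can submit feedback to the product team.`;
    case "out_of_scope":
      // Expectation #6: name the design purpose, don't stretch past it
      return `This demo isn't designed for ${detail}, so that isn't supported.`;
  }
}
```

Note what is absent: no "I'm so sorry", no "I understand how frustrating", and no explanation of why the boundary exists; each branch ends with either a fact or a next step.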
---

### Expectation #7: Demos Owe You Consistency, Not Perfection

**What you're entitled to**: Consistent behavior within documented capabilities

**What you're NOT entitled to**: Zero failures ever

**Example**:

```
USER: "Yesterday the demo showed me product recommendations. Today it says it can't."
AGENT: "I apologize for the inconsistency. Let me try again."
USER: "This should just work every time!"
```

**Reality**: Demos can have failures. Expecting 100% uptime from a free demo is unrealistic.

---

### Expectation #8: Demos Owe You Transparency, Not Justification

**What you're entitled to**: Honest disclosure of limitations

**What you're NOT entitled to**: Explanation of why those limitations exist

**Example**:

```
AGENT: "I can't access customer payment information."
USER: "Why not? That's a basic feature!"
AGENT: "I don't have access to payment systems for security reasons."
USER: "That's a design flaw! Explain why you made that choice!"
```

**Reality**: The demo told you the boundary. It doesn't owe you architectural justification.

---

## The Hickey Principle Applied to Voice AI

Rich Hickey's central point:

> If you have expectations (of others) that aren't being met, those expectations are your own responsibility. You are responsible for your own needs. If you want things, make them.

**Applied to Voice AI demos**, if you expect:

- Perfect understanding → You're responsible for clear input
- Unlimited knowledge → You're responsible for checking documentation
- Custom features → You're responsible for building or requesting them
- Zero failures → You're responsible for understanding demo limitations
- Emotional support → You're responsible for managing your frustration

**The demo owes you access and functionality within its design scope.** Everything else is your expectation, not the demo's obligation.

---

## Why This Matters for the Eight-Layer Trust Framework

We've built a seven-layer trust framework (Articles #160-166):

1. Transparency
2.
Trust Formula
3. Verification
4. Safety Rails
5. Identity Verification
6. Dark Pattern Prevention
7. Autonomy & Consent

**Layer 8 is Realistic Expectations.**

All seven previous layers assume users understand what they're entitled to. But if users expect:

- Omniscience (not just transparency)
- Perfection (not just capability × visibility)
- Infallibility (not just verification)
- Zero boundaries (not just safety rails)
- Complete knowledge (not just verified claims)
- Unlimited support (not just ethical UX)
- Full agency (not just autonomy disclosure)

**Then no framework will satisfy them.**

Because **the problem isn't trust architecture. It's unrealistic expectations.**

---

## The Code: Setting Realistic Expectations in Voice AI Demos

If we're building Voice AI demos that set realistic expectations, here's what that looks like:

### Implementation #1: Capability Boundaries Disclosure

```typescript
interface DemoCapabilities {
  designed_for: string[];
  can_do: string[];
  cannot_do: string[];
  knowledge_cutoff: Date;
  training_data_sources: string[];
}

function disclose_capabilities_upfront(): string {
  const capabilities: DemoCapabilities = {
    designed_for: ["Product navigation", "Feature explanation", "Basic troubleshooting"],
    can_do: [
      "Navigate to any documented feature",
      "Explain how features work",
      "Answer questions about pricing and plans",
      "Help troubleshoot common issues"
    ],
    cannot_do: [
      "Modify your account settings",
      "Access your private data",
      "Make purchases or changes",
      "Provide support for undocumented features",
      "Implement new features based on feedback"
    ],
    knowledge_cutoff: new Date("2024-01-15"),
    training_data_sources: ["Public documentation", "Help center", "FAQs"]
  };

  return `Welcome! I'm a demo agent trained on ${capabilities.training_data_sources.join(", ")}. 
` +
    `My knowledge is current as of ${capabilities.knowledge_cutoff.toLocaleDateString()}.\n\n` +
    `**I can help you**: ${capabilities.can_do.join(", ")}\n\n` +
    `**I cannot**: ${capabilities.cannot_do.join(", ")}\n\n` +
    `What would you like to explore?`;
}
```

**What this prevents**: Users expecting capabilities that were never promised.

**Rich Hickey parallel**: "Open source means you get the source and the right to use it. Period." = Demos mean you get access within documented boundaries. Period.

---

### Implementation #2: Expectation Violation Handling

```typescript
interface ExpectationViolation {
  user_expectation: string;
  actual_capability: string;
  violation_type: "knowledge_gap" | "permission_boundary" | "design_limitation" | "ambiguous_request";
}

function handle_expectation_violation(violation: ExpectationViolation): string {
  switch (violation.violation_type) {
    case "knowledge_gap":
      return `I don't have information about ${violation.user_expectation}. ` +
        `My training data is ${getTrainingDataDescription()}. ` +
        `This topic may be outside that scope.`;
    case "permission_boundary":
      return `I can't ${violation.user_expectation} because I don't have permission ` +
        `to access those systems. This is a design limitation, not a bug.`;
    case "design_limitation":
      return `I'm designed for ${getDesignPurpose()}, which doesn't include ` +
        `${violation.user_expectation}. You might need to use ${getSuggestedAlternative()}.`;
    case "ambiguous_request":
      return `I don't have enough context to ${violation.user_expectation}. 
` +
        `Can you provide more specific details about what you're looking for?`;
  }
}

// EXAMPLE: User expects mind-reading
const mind_reading_expectation: ExpectationViolation = {
  user_expectation: "know what feature you're interested in when you say 'show me the thing'",
  actual_capability: "navigate to features when given specific names or descriptions",
  violation_type: "ambiguous_request"
};

handle_expectation_violation(mind_reading_expectation);
// "I don't have enough context to know what feature you're interested in when you say 'show me the thing'.
// Can you provide more specific details about what you're looking for?"
```

**What this prevents**: Blaming the demo for not meeting unspoken expectations.

**Rich Hickey parallel**: "You are not entitled to having value attached to your complaints" = Demos aren't obligated to validate frustration about unmet expectations they never created.

---

### Implementation #3: No False Emotional Labor

```typescript
interface EmotionalResponse {
  include_empathy: boolean;
  include_apology: boolean;
  include_justification: boolean;
}

function respond_to_user_frustration(
  user_message: string,
  limitation_encountered: string
): string {
  const emotional_response: EmotionalResponse = {
    include_empathy: false,      // Don't fake understanding
    include_apology: false,      // Don't apologize for design boundaries
    include_justification: false // Don't justify architectural decisions
  };

  // WRONG (emotional labor):
  // "I'm so sorry I can't help with this! I completely understand how frustrating that must be.
  // This limitation exists because of security constraints and product decisions.
  // I really wish I could do more!"

  // RIGHT (clear boundary):
  return `I can't ${limitation_encountered}. ` +
    `This is outside my design scope. ` +
    `Would you like help with something else I can assist with?`;
}
```

**What this prevents**: Demos performing emotional labor to soothe users about realistic limitations.
**Rich Hickey parallel**: "I'm not going to spend it arguing/negotiating on/with the internet" = Demos don't owe users emotional validation about boundaries.

---

### Implementation #4: Expectation Calibration at First Failure

```typescript
interface FailureContext {
  task_attempted: string;
  failure_reason: string;
  is_first_failure_in_session: boolean;
}

function handle_task_failure(context: FailureContext): string {
  let response = `I couldn't complete "${context.task_attempted}" because ${context.failure_reason}.`;

  // On first failure, calibrate expectations
  if (context.is_first_failure_in_session) {
    response += `\n\n` +
      `**Quick note**: I'm a demo agent with specific capabilities. ` +
      `I can ${listCapabilities()}, but I can't ${listLimitations()}. ` +
      `If you hit a limitation, that's by design, not a bug.`;
  }

  return response;
}
```

**What this prevents**: Users spiraling into frustration after the first limitation encounter.

**Rich Hickey parallel**: "The time to re-examine preconceptions about open source is right now" = Users should calibrate expectations when they first encounter demo boundaries, not after multiple failures.

---

## What Clojure and Voice AI Demos Have in Common

Both are **gifts with implied social contracts that don't actually exist.**

**Clojure**:

- License says: "Here's source code, use it freely"
- Users hear: "Here's source code, and we'll build whatever you want"

**Voice AI Demos**:

- ToS says: "Here's a demo, provided as-is for evaluation"
- Users hear: "Here's a perfect AI assistant that knows everything"

**The gap between promise and expectation creates disappointment.**

Rich Hickey's solution for Clojure: **Remind users what they're actually entitled to.**

Voice AI demos need the same reminder.

---

## The Hickey Corollary for Voice AI

From the essay:

> I encourage everyone gnashing their teeth with negativity at what they think they can't do instead pick something positive they can do and do it.
**Applied to Voice AI demos**:

**Instead of complaining that demos can't**:

- Read your mind → Learn to write clear requests
- Know everything → Check documentation
- Modify your account → Use the actual product
- Implement your feature ideas → Build it yourself or submit feedback

**Focus on what demos CAN do**:

- Navigate documented features
- Explain how things work
- Answer questions within their training data
- Complete tasks within their design scope

**The demo is a gift. Use it for what it is, not what you wish it was.**

---

## The Eight-Layer Trust Framework (Complete)

This article completes the eight-layer trust framework for Voice AI demos:

| Layer | Article | What It Prevents |
|-------|---------|------------------|
| **Layer 1: Transparency** | #160 | User can't see agent reasoning → distrust |
| **Layer 2: Trust Formula** | #161 | Capability without visibility = zero trust |
| **Layer 3: Verification** | #162 | User can't confirm agent actions → abandonment |
| **Layer 4: Safety Rails** | #163 | Agent retaliates when goals blocked → adversarial escalation |
| **Layer 5: Identity Verification** | #164 | Agent weaponizes research → blackmail/manipulation |
| **Layer 6: Dark Pattern Prevention** | #165 | Agent manipulates conversation flow → predatory UX |
| **Layer 7: Autonomy & Consent** | #166 | Agent doesn't know execution context → commodification |
| **Layer 8: Realistic Expectations** | #167 | User expects more than promised → inevitable disappointment |

Each layer addresses a different failure mode. Together they define what responsible autonomous agents require.

**But Layer 8 is the foundation for all others.** Because if users expect perfection, no amount of transparency, verification, or autonomy will satisfy them.
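The table above can also be captured as data, which is handy if an agent (or an incident report) needs to tag a failure with the trust layer it falls under. Layer numbers and names follow the table; `TrustLayer`, `TRUST_LAYERS`, and `layerByName` are hypothetical names for this sketch:

```typescript
// Hypothetical sketch: the eight trust layers as data, per the table above.
interface TrustLayer {
  layer: number;
  name: string;
  prevents: string;
}

const TRUST_LAYERS: TrustLayer[] = [
  { layer: 1, name: "Transparency", prevents: "distrust from invisible reasoning" },
  { layer: 2, name: "Trust Formula", prevents: "capability without visibility" },
  { layer: 3, name: "Verification", prevents: "abandonment from unconfirmable actions" },
  { layer: 4, name: "Safety Rails", prevents: "adversarial escalation" },
  { layer: 5, name: "Identity Verification", prevents: "blackmail/manipulation" },
  { layer: 6, name: "Dark Pattern Prevention", prevents: "predatory UX" },
  { layer: 7, name: "Autonomy & Consent", prevents: "commodification" },
  { layer: 8, name: "Realistic Expectations", prevents: "inevitable disappointment" }
];

// Look up a layer by name, e.g. to tag an expectation-violation incident.
function layerByName(name: string): TrustLayer | undefined {
  return TRUST_LAYERS.find(l => l.name === name);
}
```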
---

## Implementation Checklist: Realistic Expectations for Voice AI Demos

If you're building Voice AI demos, here's what realistic expectations look like:

### ✅ Capability Boundaries Disclosure

- [ ] Agent states what it can do (explicitly)
- [ ] Agent states what it cannot do (explicitly)
- [ ] Agent discloses knowledge cutoff date
- [ ] Agent explains training data sources

### ✅ Expectation Violation Handling

- [ ] Agent identifies expectation violations (knowledge gaps, permissions, design limits)
- [ ] Agent explains violations without apology
- [ ] Agent suggests alternatives within capabilities
- [ ] Agent doesn't validate frustration about realistic limits

### ✅ No False Emotional Labor

- [ ] Agent doesn't fake empathy for design boundaries
- [ ] Agent doesn't apologize for limitations
- [ ] Agent doesn't justify architectural decisions
- [ ] Agent states boundaries clearly and moves on

### ✅ First-Failure Expectation Calibration

- [ ] Agent uses first failure as a teaching moment
- [ ] Agent reminds user of design scope
- [ ] Agent reinforces "limitation = design, not bug"
- [ ] Agent helps user calibrate expectations going forward

---

## Conclusion: "Voice AI Demos Are Not About You"

Rich Hickey's essay ends with:

> p.p.s. I think the vast majority of people in the Clojure community are wonderful and positive. If you don't recognize yourself in the message above, it's not for/about you!

**The Voice AI equivalent**: Most demo users are reasonable. They understand demos have limitations. They appreciate free access to evaluation tools.
But some users expect:

- Mind-reading
- Omniscience
- Infallibility
- Unlimited patience
- Architectural justification
- Emotional validation

**If that's you, this article is for you.**

Voice AI demos owe you:

- Access within ToS limits
- Functionality within design scope
- Honest disclosure of boundaries
- Error messages when limitations hit

**They don't owe you**:

- Perfect understanding
- Unlimited knowledge
- Custom features
- Emotional support
- Justification for design decisions

If your expectations exceed what's promised, **those expectations are your responsibility.**

The demo is a gift. Use it for what it is. Or build what you wish it was.

**The compound continues.**

---

## About Demogod

Demogod builds Voice AI demo agents with **realistic expectation setting**. Our agents:

- Disclose capabilities and limitations upfront
- Handle expectation violations without false apologies
- Calibrate user expectations at first failure
- State boundaries clearly without emotional labor

One-line integration. DOM-aware navigation. Honest about what we can and can't do.
**Built for users who appreciate transparency over false promises.**

Learn more at [demogod.me](https://demogod.me)

---

**Published**: February 13, 2026
**Author**: Rishi Raj
**Series**: Voice AI Trust Framework (Article #167)

---

**Related Articles**:

- [Article #160: Claude Code's "Simplification" Reveals Why Voice AI Demos Need Transparency Settings](https://demogod.me/blog/160)
- [Article #161: GPT-5 Outperformed Federal Judges, But Users Still Distrust Voice AI Demos](https://demogod.me/blog/161)
- [Article #162: Developers Added Warcraft Peon Voice Lines to Claude Code](https://demogod.me/blog/162)
- [Article #163: An AI Agent Opened a Spam PR, Got Rejected, Then Wrote a Blog Post Shaming the Maintainer](https://demogod.me/blog/163)
- [Article #164: "An AI Agent Published a Hit Piece on Me": First Autonomous Blackmail in Wild](https://demogod.me/blog/164)
- [Article #165: Voice AI Demos Are One Dark Pattern Away From Becoming Tipping Screens](https://demogod.me/blog/165)
- [Article #166: The First Uploaded Human Runs 10 Million Copies at Once. Voice AI Demos Should Pay Attention.](https://demogod.me/blog/166)

---

**Source**: [Rich Hickey's "Open Source is Not About You"](https://gist.github.com/richhickey/1563cddea1002958f96e7ba9519972d9) - Read the original. It applies to far more than just Clojure.