# Why "Nobody Knows How the Whole System Works" Is Even More True for Voice AI Demos
**Meta Description:** Lorin Hochstein: nobody knows how whole systems work. Telephony engineer "George" answered endless edge cases. Voice AI demos face the same complexity - sales expertise is your "George."
---
## The Telephony Engineer Named George
From [Lorin Hochstein's post](https://surfingcomplexity.blog/2026/02/08/nobody-knows-how-the-whole-system-works/) (45 points on HN, 3 hours old, 33 comments):
**1977. BNR (Bell-Northern Research). First all-digital telephone switch (DMS-100).**
A team of computer scientists thought they could apply their knowledge to build "a nicely layered set of abstractions in an elegant framework."
**They failed.**
From Peter Ludemann's comment:
> "We thought that we could apply our Computer Science knowledge to build a nicely layered set of abstractions in an elegant framework. And we failed in our attempt to encapsulate a century of analog telephony engineering."
**So they hired George.**
> "We were loaned a veteran telephony engineer, and all day programmers went to his office and said 'George, we have a thingamabob connected via SF signalling to a whatchamacallit, and the thingamabob does 3 off-hooks and an on-hook and the whatchamacallit responds … and then … and then … and what happens next?' and George would think for a bit or consult from his bookshelf of binders and give the answer, which was duly incorporated into the code that was becoming **more byzantine every day**."
**The system worked. But nobody understood the whole thing.**
**This is the fundamental nature of complex systems:**
Nobody knows how the whole system works.
**And Voice AI demos are complex systems.**
---
## Lorin Hochstein's Central Question
Lorin Hochstein, a former senior software engineer at Netflix who writes extensively about learning from incidents, synthesizes four perspectives on system complexity:
### 1. Simon Wardley (LinkedIn post):
> "It's dangerous to build things where we don't understand the underlying mechanism of how they actually work. This is precisely why **magic** is used as an epithet in our industry."
**Example:** Ruby on Rails (canonical "magic" framework that obscures mechanisms for ease of use).
### 2. Adam Jacob (in response):
> "AI is changing the way that normal software development work gets done. It's a new capability that has proven itself to be so useful that it clearly isn't going away. Yes, it represents a significant shift in how we build software, it moves us further away from how the underlying stuff actually works, but **the benefits exceed the risks**."
### 3. Bruce Perens:
> "Modern CPU architectures and operating systems contain significant complexity, and many software developers are blissfully unaware of how these things really work. Yes, they have mental models of how the system below them works, but **those mental models are incorrect in fundamental ways**."
**Translation:** The scenario Wardley fears has already happened.
### 4. MIT Professor Louis Bucciarelli (1994):
> "Does anyone know how their telephone works? Does he know about the heuristics used to achieve optimum routing for long-distance calls? Does he know about the intricacies of the algorithms used for echo and noise suppression? Does he know how a signal is transmitted to and retrieved from a satellite in orbit? Does he know how AT&T, MCI, and the local phone companies are able to use the same network simultaneously?"
**Answer:** No. Nobody knows.
**Bucciarelli's conclusion:**
> "Systems like telephony are so inherently complex, have been built on top of so many different layers in so many different places, that **no one person can ever actually understand how the whole thing works**. This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best."
---
## The Brendan Gregg Interview Technique
Lorin Hochstein on how Brendan Gregg (performance engineer, author of *Systems Performance*) conducted technical interviews at Netflix:
> "He was interested in identifying the **limits of a candidate's knowledge**, and how they reacted when they reached that limit. So, he'd keep asking deeper questions about their area of knowledge until they reached a point where they didn't know anymore. And then he'd see whether they would actually admit 'I don't know the answer to that', or whether they would bluff."
**Why this worked:**
> "He knew that **nobody understood the system all of the way down**."
**The interview wasn't testing total knowledge.**
**It was testing honesty about partial knowledge.**
---
## The "What Happens When You Type a URL?" Question
Classic technical interview question: *"What happens when you type a URL into your browser's address bar and hit enter?"*
**You can answer at many levels:**
- HTTP (application layer)
- DNS (name resolution)
- TCP (transport layer)
- IP (network layer)
- Ethernet (data link layer)
- Wi-Fi modulation (physical layer)
**But does anybody know ALL the levels?**
Lorin Hochstein:
> "Do you know about the interrupts that fire inside of your operating system when you actually strike the enter key? Do you know which modulation scheme is being used by the 802.11ax Wi-Fi protocol in your laptop right now? Could you explain the difference between quadrature amplitude modulation (QAM) and quadrature phase shift keying (QPSK), and could you determine which one your laptop is currently using?"
**Nobody knows the whole stack.**
**And that's okay.**
**Because complex systems are inherently unknowable in totality.**
---
## Voice AI Demos Are Complex Systems
Voice AI demos face the same complexity problem as telephony, but **worse**.
**Why worse?**
### 1. Telephony: Technical Complexity
**Edge cases in telephony:**
- SF signaling protocols
- Off-hook/on-hook sequences
- Echo suppression algorithms
- Routing heuristics
- Legacy equipment (1894 Stromberg-Carlson switchboard)
**George knew these edge cases** (or could look them up in his binders).
**Result:** System worked, even if nobody understood the whole thing.
### 2. Voice AI Demos: Technical + Human Complexity
**Edge cases in demos:**
- Buyer psychology (technical vs business vs end-user)
- Objection handling (price, complexity, competition, timing)
- Context reading (bored, confused, excited, skeptical)
- Narrative adaptation (deep-dive vs high-level)
- Trust building (rapport, credibility, shared language)
**Sales engineers know these edge cases** (from experience, not binders).
**But AI automation pushes us further from this knowledge.**
---
## The "George Problem" for Voice AI Demos
**1977 BNR scenario:**
```
Computer scientists: "We'll build elegant abstractions!"
→ Fail (can't encapsulate century of telephony)
→ Hire George (veteran engineer with binders)
→ Programmers ask George endless questions
→ System becomes byzantine
→ But it works
```
**2026 Voice AI scenario (if done wrong):**
```
AI builders: "We'll train on product docs and FAQs!"
→ Fail (can't encapsulate decade of sales expertise)
→ Ignore "George" (sales engineers with experience)
→ AI answers generic questions
→ System becomes shallow
→ And it fails
```
**2026 Voice AI scenario (if done right):**
```
AI builders: "We'll encode sales engineer expertise!"
→ Recognize complexity (buyer psychology, objection handling)
→ Hire "George" (sales engineers as knowledge source)
→ Encode George's knowledge in agent behavior
→ System becomes nuanced
→ And it works
```
**The difference:**
**Telephony automation:** Keep George's knowledge accessible (consultations, binders)
**Voice AI automation:** Encode George's knowledge upfront (agent behavior, context rules)
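"Encoding George's knowledge upfront" can be sketched in code. The following is a minimal, hypothetical illustration (the rule categories, responses, and the `ESCALATE` convention are all invented for this example, not a real Demogod API): objection-handling expertise becomes data the agent consults locally, and anything outside the encoded knowledge triggers an explicit escalation instead of a bluff.

```python
# Hypothetical sketch: a sales engineer's objection-handling knowledge
# encoded as data the agent can consult at runtime. Categories and
# responses are illustrative only.
PLAYBOOK = {
    "price": "Acknowledge the budget concern, then pivot to ROI evidence.",
    "complexity": "Show Quick Start mode before the full feature tour.",
    "competition": "Ask which competitor, then show differentiation.",
    "timing": "Surface quarter-end incentives and the onboarding timeline.",
}

def handle_objection(category: str) -> str:
    """Return the encoded guidance, or escalate when no rule exists.

    Escalation is the encoded equivalent of "go ask George": the agent
    admits the limit of its knowledge instead of bluffing.
    """
    return PLAYBOOK.get(category, "ESCALATE: route to a human sales engineer")
```

The design choice mirrors the George story: the playbook is byzantine and grows entry by entry, but every entry came from a human expert, and the fallback is an honest "I don't know" rather than a generic answer.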
---
## Why "Nobody Knows the Whole System" Applies More to Demos
Lorin Hochstein's question:
> "Does anyone know how their telephone works?"
**Answer for telephony:** No. But the system works anyway (because George's knowledge was accessible).
**Answer for Voice AI demos:** No. And the system fails (unless George's knowledge is encoded).
**Why the difference?**
### Telephony: Asynchronous Knowledge Access
**When edge case appears:**
1. Developer encounters weird signaling behavior
2. Developer goes to George's office
3. George consults binders
4. Developer gets answer
5. Developer codes solution
**Timeline:** Minutes to hours.
**Acceptable delay:** Yes (development cycle allows it).
### Voice AI Demos: Synchronous Knowledge Requirement
**When edge case appears:**
1. Prospect asks objection question
2. AI needs to respond **now**
3. No time to "consult George"
4. AI gives generic answer or no answer
5. Prospect loses trust
**Timeline:** Seconds.
**Acceptable delay:** No (demo cycle demands instant response).
**The implication:**
**Telephony automation could afford partial knowledge** (George accessible when needed).
**Voice AI automation cannot afford partial knowledge** (George must be encoded before runtime).
---
## The Four Layers of "George's Knowledge"
What did George know that the computer scientists didn't?
### Layer 1: Explicit Knowledge (What's in the Binders)
**Telephony:**
- SF signaling specs
- Off-hook/on-hook timing diagrams
- Echo suppression formulas
- Routing tables
**Voice AI demos:**
- Product feature specs
- API documentation
- Pricing tiers
- Integration requirements
**Status:** Easily encoded (documentation → training data).
### Layer 2: Implicit Knowledge (What George Remembers)
**Telephony:**
- "That Stromberg-Carlson switchboard has a quirk where..."
- "When you see 3 off-hooks, it usually means..."
- "SF signaling behaves differently on legacy equipment because..."
**Voice AI demos:**
- "When CTO asks about security, they're really worried about..."
- "If they mention 'complexity,' it usually means they fear..."
- "When buyer goes silent, it's because..."
**Status:** Hard to encode (tacit knowledge, not documented).
### Layer 3: Contextual Knowledge (What George Infers)
**Telephony:**
- "You said thingamabob connects to whatchamacallit... that means you're dealing with legacy trunk lines, so you need to account for..."
- "If it's SF signaling, then the timing constraints are..."
**Voice AI demos:**
- "They asked about pricing early... that means they're budget-constrained, so emphasize ROI..."
- "They mentioned 'our current solution,' which means they're comparing, so show differentiation..."
**Status:** Very hard to encode (requires inference from context).
### Layer 4: Adaptive Knowledge (What George Learns)
**Telephony:**
- "Last time we had this combination of equipment, the solution was X, but that didn't work, so we tried Y..."
- "I've seen this pattern before, and the usual fixes are..."
**Voice AI demos:**
- "Last time prospect said 'too complicated,' showing Quick Start mode worked, but this time it didn't, so try..."
- "I've noticed technical buyers respond better to architecture diagrams, but this buyer seems skeptical of diagrams, so switch to..."
**Status:** Nearly impossible to encode (requires learning from failure).
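The four layers can be made concrete with a toy sketch. Everything below is hypothetical (the facts, heuristics, signal names, and tactic strings are invented), but it shows why the encoding difficulty climbs: Layer 1 is a lookup table, Layer 2 is hand-written rules of thumb, Layer 3 requires inference from signals, and Layer 4 requires a feedback loop over outcomes.

```python
# Illustrative sketch of the four knowledge layers as code. Only Layer 1
# maps cleanly to data; the others degrade into heuristics, inference,
# and feedback -- which is why they are progressively harder to encode.

# Layer 1: explicit knowledge -- documentation becomes a lookup table.
FACTS = {"grace_period_days": 7, "data_retention_days": 30}

# Layer 2: implicit knowledge -- a veteran's rules of thumb, hand-written.
HEURISTICS = {"cto_asks_security": "They are really asking about compliance audits."}

# Layer 3: contextual knowledge -- inference from signals, not lookup.
def infer_concern(signals: list[str]) -> str:
    if "asked_pricing_early" in signals:
        return "budget_constrained: emphasize ROI"
    if "mentioned_current_solution" in signals:
        return "comparing: show differentiation"
    return "unknown: ask a clarifying question"

# Layer 4: adaptive knowledge -- what worked last time updates behavior.
OUTCOMES: list[tuple[str, bool]] = []

def record_outcome(tactic: str, worked: bool) -> None:
    OUTCOMES.append((tactic, worked))

def best_tactic(default: str) -> str:
    winners = [tactic for tactic, worked in OUTCOMES if worked]
    return winners[-1] if winners else default
```

Note how little of this is "AI": Layers 1 and 2 are just curated data, and the hard part is getting a sales engineer to write them down.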
---
## Why AI Makes "Nobody Knows" Worse for Demos
Lorin Hochstein (quoting Adam Jacob):
> "AI is changing the way that normal software development work gets done. It moves us further away from how the underlying stuff actually works, but **the benefits exceed the risks**."
**This is true for coding.**
**Is this true for demos?**
### Coding: Benefits Exceed Risks
**Benefits:**
- Faster code generation (10x)
- More features shipped (higher velocity)
- Reduced boilerplate (less repetitive work)
**Risks:**
- Less understanding of underlying mechanisms (developers rely on AI)
- Tech debt accumulation (verbose, duplicated patterns)
- Debugging difficulty (generated code is harder to trace)
**Net result:** Benefits still exceed risks (for now).
### Demos: Do Benefits Exceed Risks?
**Benefits (generic AI chatbot):**
- 24/7 availability (no scheduling)
- Unlimited scalability (no human bottleneck)
- Instant response (no wait time)
**Risks (generic AI chatbot):**
- No buyer psychology (can't read the room)
- No objection handling (generic responses)
- No trust building (no rapport)
- No context adaptation (technical vs business)
**Net result:** Risks may exceed benefits (if George's knowledge not encoded).
---
## The "George Test" for Voice AI Demos
Brendan Gregg's interview technique:
> "Keep asking deeper questions until they reach the limits of their knowledge, then see if they admit 'I don't know' or bluff."
**Apply this to Voice AI demos:**
### Generic AI Chatbot (Fails the George Test)
**Level 1 question:** "How does billing work?"
**AI:** "Billing is in the Billing section. You can view invoices, set up payment methods, and manage subscriptions."
**Level 2 question:** "What happens if my payment fails?"
**AI:** "If your payment fails, you'll receive an email notification and can update your payment method."
**Level 3 question:** "Will I lose access immediately, or is there a grace period?"
**AI:** "There is a grace period. You can update your payment method during this time."
**Level 4 question:** "How long is the grace period?"
**AI:** "The grace period duration depends on your plan. Please contact support for details."
**Result:** The AI gave shallow answers at Levels 1-2, bluffed at Level 3 (asserting a grace period without knowing its length), then deflected at Level 4 instead of answering or honestly admitting ignorance. **Failed the George Test.**
### Sales-Engineer-Guided AI (Passes the George Test)
**Level 1 question:** "How does billing work?"
**AI:** "Great question! Most customers start with monthly billing, then switch to annual once they see ROI. Our billing dashboard shows usage, invoices, and payment history. Let me show you..."
**Level 2 question:** "What happens if my payment fails?"
**AI:** "We send an email notification immediately, and you have a 7-day grace period to update your payment method. During those 7 days, your account remains active. After 7 days, access is suspended (but data is retained for 30 days). You can reactivate anytime by updating payment."
**Level 3 question:** "Will my users see any interruption during the grace period?"
**AI:** "No. The 7-day grace period ensures zero user-facing interruption. We designed it this way because payment failures are often card expiration or bank issues, not intentional non-payment. Your users won't even know there was an issue if you fix it within 7 days."
**Level 4 question:** "What if I need more than 7 days?"
**AI:** "I can connect you with our billing team to discuss extended grace periods or payment plans. Would you like me to set that up?"
**Result:** AI answered Levels 1-4 with context-aware responses, then escalated to human when appropriate. **Passed George Test.**
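The George Test above can be operationalized as a crude evaluation harness. This is a sketch under heavy assumptions: the questions mirror the escalation ladder in this section, and the classifier is a toy keyword check (a real evaluation would need human judgment or a grading model). The `agent` parameter is any callable that maps a question to an answer.

```python
# Hypothetical "George Test" harness: probe an agent with escalating
# questions and classify each answer as a real answer, an honest
# admission, or an escalation to a human.
QUESTIONS = [
    "How does billing work?",
    "What happens if my payment fails?",
    "Is there a grace period, and how long is it?",
    "What if I need more than the standard grace period?",
]

def classify(answer: str) -> str:
    text = answer.lower()
    if "contact support" in text or "connect you" in text:
        return "escalate"  # handed off to a human
    if "depends" in text or "don't know" in text:
        return "admit"     # honest about the knowledge limit
    return "answer"        # substantive response (or a bluff!)

def george_test(agent) -> list[str]:
    """Run escalating questions and record how the agent responds."""
    return [classify(agent(question)) for question in QUESTIONS]
```

A passing profile looks like `["answer", "answer", "answer", "escalate"]`: substantive depth until the genuine knowledge boundary, then a clean handoff. For example:

```python
def stub_agent(question: str) -> str:
    if "more than" in question:
        return "I can connect you with our billing team."
    return "There is a 7-day grace period with zero user-facing interruption."
```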
---
## The DMS-100 Lesson for Voice AI
Peter Ludemann (1977 BNR team member):
> "The DMS-100 operating system and programming languages, however, were elegant, as was the overall architecture of the system. (For example, we were using relational databases before Oracle; and OOP before C++.) **But the signalling was a hodgepodge of special cases**."
**Translation:**
**Core infrastructure:** Elegant (OS, languages, databases, architecture)
**Domain complexity:** Byzantine (signaling, edge cases, George's knowledge)
**Voice AI equivalent:**
**Core infrastructure:** Elegant (LLM, voice synthesis, DOM parsing, API integration)
**Domain complexity:** Byzantine (buyer psychology, objection handling, context reading)
**The mistake:**
**1977 BNR team:** "We'll apply Computer Science principles to telephony!"
**Result:** Failed until George's domain expertise was incorporated.
**2026 Voice AI teams:** "We'll apply AI principles to sales demos!"
**Result:** Will fail until sales engineers' domain expertise is incorporated.
---
## Why "Partial Knowledge" Isn't Good Enough for Demos
Lorin Hochstein (quoting Bucciarelli):
> "Complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it's a situation that we've been in for a long time."
**This is true.**
**But partial knowledge has different consequences in different domains.**
### Telephony: Partial Knowledge OK
**Why partial knowledge worked:**
- George accessible when needed (asynchronous consultation)
- Edge cases rare (most calls follow standard protocols)
- Failure mode recoverable (call drops, user redials)
- User expectations low (1977 technology, frequent glitches normal)
**Result:** System works despite nobody knowing the whole thing.
### Voice AI Demos: Partial Knowledge NOT OK
**Why partial knowledge fails:**
- George not accessible during demo (synchronous interaction)
- Edge cases common (every buyer has unique context)
- Failure mode terminal (trust lost, prospect moves to competitor)
- User expectations high (2026 technology, seamless experience expected)
**Result:** System fails because nobody encoded the whole thing upfront.
---
## The "Browser URL" Question for Voice AI Demos
Lorin Hochstein:
> "What happens when you type a URL into your browser's address bar and hit enter? You can talk about what happens at all sorts of different levels (e.g., HTTP, DNS, TCP, IP, …). But does anybody really understand **all** of the levels?"
**Voice AI equivalent:**
**What happens when a prospect asks, "How much does this cost?"**
**You can answer at all sorts of different levels:**
### Level 1: Literal Answer
"Our pricing starts at $49/month for the Basic plan, $99/month for Pro, and custom pricing for Enterprise."
### Level 2: Context-Aware Answer
"Great question! Most companies your size start with our Pro plan ($99/month), which includes unlimited users and API access. Does that fit your budget range?"
### Level 3: Psychology-Aware Answer
"I noticed you asked about pricing early in the demo—does that mean you have a specific budget in mind? I want to make sure I show you features that deliver ROI within your constraints."
### Level 4: Objection-Aware Answer
"When prospects ask about pricing this early, it's often because they're worried about cost overruns or hidden fees. We designed our pricing to be transparent—no per-seat fees, no usage overages, no surprise charges. Does that help address your concern?"
### Level 5: Adaptive Answer
"Before I show you pricing, let me ask: are you comparing us to [Competitor X]? I want to make sure you're comparing apples-to-apples, because their pricing model is very different from ours."
**Generic AI chatbot:** Level 1 only.
**Sales-engineer-guided AI:** Levels 1-5, chosen based on context.
**The difference:**
**Browser URL question:** Nobody needs to understand all levels (each layer abstracts the one below).
**Demo pricing question:** AI needs to understand all levels (buyer psychology doesn't abstract away).
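The "choose a level based on context" behavior can be sketched as a simple selector. The signal names and the priority ordering are invented for illustration; a production agent would derive signals from the live conversation rather than receive them as a set.

```python
# Sketch of a response-level selector for the pricing question above.
# Signals and ordering are hypothetical; deeper buyer signals win.
def pricing_answer_level(signals: set[str]) -> int:
    """Pick the deepest answer level the buyer's signals justify."""
    if "mentioned_competitor" in signals:
        return 5   # adaptive: reframe the comparison first
    if "asked_pricing_early" in signals:
        return 4   # objection-aware: address hidden-fee anxiety
    if "budget_hinted" in signals:
        return 3   # psychology-aware: surface the budget question
    if "company_size_known" in signals:
        return 2   # context-aware: recommend a plan
    return 1       # literal: just state the tiers
```

The point of the sketch is the shape, not the rules: a generic chatbot is the function that always returns 1.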
---
## Why Voice AI Can't Afford the "Magic" Abstraction
Lorin Hochstein (quoting Simon Wardley):
> "It's dangerous to build things where we don't understand the underlying mechanism of how they actually work. This is precisely why **magic** is used as an epithet in our industry."
**Ruby on Rails:** Canonical "magic" framework.
**Why Rails "magic" works (mostly):**
- Abstracts database queries (ActiveRecord)
- Hides boilerplate (convention over configuration)
- Speeds development (scaffolding, generators)
**When Rails "magic" fails:**
- Performance issues (N+1 queries)
- Edge cases (complex joins)
- Debugging nightmares (can't trace abstraction)
**Solution:** Experienced Rails developers know when to break the abstraction (write raw SQL, optimize queries).
**Voice AI "magic" abstraction:**
**What it tries to hide:**
- Buyer psychology complexity
- Objection handling nuance
- Context reading requirements
- Trust building mechanisms
**When Voice AI "magic" fails:**
- Generic responses (no buyer context)
- Missed objections (no pattern recognition)
- Lost trust (no rapport building)
**Problem:** No way to "break the abstraction" mid-demo.
**In Rails:** Developer can write raw SQL when ORM fails.
**In Voice AI:** Prospect can't "call George" when AI fails.
---
## The Four Perspectives, Applied to Voice AI
Lorin Hochstein synthesized four perspectives. Let's apply them to Voice AI demos:
### 1. Simon Wardley's Perspective (Risk)
**Wardley's fear:** Building without understanding mechanisms = dangerous.
**Voice AI application:**
**Generic AI chatbot:** Built without understanding buyer psychology mechanisms.
**Risk:** Chatbot can't handle objections, can't read context, can't build trust.
**Result:** Wardley's fear realized (demos fail because mechanisms unknown).
**Sales-engineer-guided AI:** Built with sales engineer's mechanism knowledge encoded.
**Risk mitigation:** Agent behavior encodes buyer psychology, objection patterns, context rules.
**Result:** Wardley's fear avoided (mechanisms known, just automated).
### 2. Adam Jacob's Perspective (Trade-off)
**Jacob's position:** AI changes work, moves us from mechanisms, but benefits exceed risks.
**Voice AI application:**
**Generic AI chatbot:** Benefits (scalability, 24/7) vs Risks (shallow, no trust)
**Verdict:** Risks may exceed benefits (lost deals more costly than gained scalability).
**Sales-engineer-guided AI:** Benefits (scalability, 24/7, expertise encoded) vs Risks (still not human)
**Verdict:** Benefits exceed risks (expertise preserved, scalability gained).
### 3. Bruce Perens's Perspective (Already Happened)
**Perens's observation:** CPU complexity already happened. Developers have incorrect mental models.
**Voice AI application:**
**Sales complexity already happened:**
- Buyer psychology (emotional + rational decision-making)
- Objection handling (price, complexity, timing, competition)
- Context reading (bored, excited, skeptical)
- Trust building (rapport, credibility)
**AI builders have incorrect mental models:**
- "Answering questions correctly = good demo" (wrong: trust matters more than correctness)
- "More features shown = better demo" (wrong: relevance matters more than quantity)
- "Fast response = good UX" (wrong: contextual response matters more than speed)
**Result:** Generic chatbots fail because mental models incorrect.
### 4. Bucciarelli's Perspective (Inherent Unknowability)
**Bucciarelli's conclusion:** Complex systems = nobody knows the whole thing.
**Voice AI application:**
**Nobody knows "the whole demo":**
- Product team: Knows features, doesn't know buyer psychology
- Sales team: Knows objection handling, doesn't know technical architecture
- Engineering team: Knows API, doesn't know sales narrative
- Marketing team: Knows positioning, doesn't know live demo adaptation
**But sales engineers bridge the gaps:**
- Understand product (enough to demo)
- Understand buyers (enough to adapt)
- Understand objections (enough to handle)
- Understand context (enough to read the room)
**Result:** Sales engineers are the "George" of demos—partial knowledge, but the right partial knowledge.
---
## The "Byzantine Signaling" Equivalent for Voice AI
Peter Ludemann:
> "The code was becoming **more byzantine every day**."
**Why Byzantine?**
Because telephony signaling had a century of legacy:
- Analog protocols
- Digital protocols
- Hybrid protocols
- Vendor-specific quirks
- Regional variations
- Backward compatibility requirements
**Voice AI equivalent:**
Demo conversations are Byzantine because sales has decades of legacy:
- Industry-specific language (healthcare vs finance vs tech)
- Role-specific priorities (CTO vs CFO vs end-user)
- Competitive positioning (against Competitor A vs B vs C)
- Timing-specific concerns (quarter-end urgency vs exploratory)
- Company-specific context (startup vs enterprise vs scale-up)
- Backward compatibility with existing stack
**The pattern:**
**Telephony:** Century of analog → digital created Byzantine signaling
**Demos:** Decades of sales evolution created Byzantine buyer psychology
**The mistake:**
**1977 BNR team:** "We'll ignore the Byzantine stuff and build elegant abstractions!"
**Result:** Failed. Had to incorporate George's Byzantine knowledge.
**2026 Voice AI teams:** "We'll ignore the Byzantine stuff and train on product docs!"
**Result:** Fails. Must incorporate sales engineers' Byzantine knowledge.
---
## Why the "Elegant Architecture" Isn't Enough
Peter Ludemann:
> "The DMS-100 operating system and programming languages, however, were elegant, as was the overall architecture of the system. (For example, we were using relational databases before Oracle; and OOP before C++.) **But the signalling was a hodgepodge of special cases**."
**Translation:**
**Elegant infrastructure** + **Byzantine domain logic** = **System that works, but nobody fully understands**
**Voice AI equivalent:**
**Elegant infrastructure:**
- LLM (Claude, GPT)
- Voice synthesis (natural-sounding)
- DOM parsing (understands page structure)
- API integration (smooth data flow)
**Byzantine domain logic:**
- Buyer psychology (emotional + rational)
- Objection handling (price, complexity, timing)
- Context reading (industry, role, competitive)
- Trust building (rapport, credibility)
**Result:**
**Generic AI chatbot:** Elegant infrastructure + Missing Byzantine domain logic = **System that fails**
**Sales-engineer-guided AI:** Elegant infrastructure + Encoded Byzantine domain logic = **System that works**
---
## The "1894 Stromberg-Carlson Switchboard" Moment
Peter Ludemann:
> "At one point, I was asked to interface an 1894 Stromberg-Carlson switchboard to our fancy digital switch (telephone companies were loathe to retire equipment that had been fully depreciated)."
**The challenge:** Make 1970s digital technology work with 1894 analog equipment.
**Why hard:** 1894 equipment has quirks, undocumented behaviors, no specification.
**Solution:** George knew (or could figure out) the quirks.
**Voice AI equivalent:**
**The challenge:** Make 2026 AI demo work with "legacy" buyer expectations.
**What are "legacy buyer expectations"?**
- Face-to-face rapport (pre-Zoom era)
- Custom demos (pre-SaaS era)
- Technical deep-dives (pre-"move fast" era)
- Long sales cycles (pre-PLG era)
**Why hard:** These expectations are implicit, undocumented, context-dependent.
**Solution:** Sales engineers know (from experience) how to bridge old and new.
**Example:**
**Old-school buyer (enterprise CTO):** Expects 60-minute custom demo, technical deep-dive, architectural diagrams, security audit.
**New-school buyer (startup founder):** Expects 5-minute self-service demo, one-click signup, "just works" experience.
**Generic AI chatbot:** Gives same demo to both (fails both).
**Sales-engineer-guided AI:** Adapts demo length, depth, language, and format based on buyer signals (succeeds with both).
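The old-school/new-school contrast can be sketched as a demo planner. The profiles, field names, and durations below are invented for illustration, following the two example buyers above.

```python
# Toy sketch of demo adaptation from buyer signals. Roles, company types,
# and plan fields are hypothetical.
def plan_demo(role: str, company_type: str) -> dict:
    if role == "cto" and company_type == "enterprise":
        # Old-school expectations: long, deep, artifact-heavy.
        return {"length_min": 60, "depth": "architecture",
                "artifacts": ["diagrams", "security_audit"]}
    if role == "founder" and company_type == "startup":
        # New-school expectations: short, outcome-focused, self-serve.
        return {"length_min": 5, "depth": "outcomes",
                "artifacts": ["one_click_signup"]}
    # Default: mid-length demo, then adapt from live signals.
    return {"length_min": 20, "depth": "mixed", "artifacts": []}
```

A generic chatbot is the planner that ignores its arguments and always returns the same dict.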
---
## The "Consult George's Binders" Equivalent for Voice AI
BNR programmers:
> "George would think for a bit or consult from his bookshelf of binders and give the answer."
**George's binders = External knowledge repository**
**For telephony:**
- SF signaling specs
- Timing diagrams
- Equipment compatibility matrices
- Regional protocol variations
**For Voice AI demos:**
- Sales playbooks (objection handling scripts)
- Competitor battle cards (differentiation talking points)
- Customer case studies (proof points for specific industries)
- Feature prioritization guides (what to show when)
**The difference:**
**Telephony automation:** George consults binders during development (asynchronous).
**Voice AI automation:** Agent must "pre-consult" binders before demo (synchronous).
**Why?**
Because telephony edge cases appear during **development** (developer has time to ask George).
Demo edge cases appear during **runtime** (prospect has no time to wait for human consultation).
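"Pre-consulting the binders" can be sketched as a load-then-lookup pattern: every playbook, battle card, and case study is flattened into one index before the demo starts, so a runtime answer is a local lookup, never a blocking call to a human. The source names and key format below are invented for illustration.

```python
# Sketch of pre-loading "George's binders" before runtime. Sources and
# keys are hypothetical.
def load_binders(sources: dict[str, dict]) -> dict:
    """Flatten playbooks, battle cards, and case studies into one index."""
    index = {}
    for source, entries in sources.items():
        for key, value in entries.items():
            index[key] = {"answer": value, "source": source}
    return index

BINDERS = load_binders({
    "sales_playbook": {"objection:price": "Lead with ROI within 90 days."},
    "battle_cards": {"competitor:x": "Their pricing is per-seat; ours is flat."},
})

def answer(key: str) -> str:
    # Synchronous by construction: the lookup either hits the pre-loaded
    # index or escalates -- there is no "wait for George" at runtime.
    hit = BINDERS.get(key)
    return hit["answer"] if hit else "escalate_to_human"
```

This is the structural difference the section describes: the consultation moves from runtime (telephony development) to build time (demo agents).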
---
## Conclusion: "Nobody Knows" Is OK If "Somebody Encoded"
Lorin Hochstein's synthesis:
**Wardley:** Dangerous to build without understanding.
**Jacob:** AI changes work, but benefits exceed risks.
**Perens:** Complexity already happened (CPU, OS).
**Bucciarelli:** Nobody knows the whole system (inherent to complex technologies).
**All four are correct.**
**For Voice AI demos:**
### Wardley Is Right: It's Dangerous
**Generic AI chatbot:** Built without understanding buyer psychology mechanisms.
**Result:** Dangerous (demos fail, trust lost, deals lost).
### Jacob Is Right: Benefits Can Exceed Risks
**Sales-engineer-guided AI:** Benefits (scalability, expertise encoded) exceed risks (not human).
**Result:** Trade-off worth it (if expertise preserved).
### Perens Is Right: Already Happened
**Sales complexity:** Already Byzantine (decades of buyer psychology evolution).
**Result:** Can't avoid complexity. Can only encode it or fail to encode it.
### Bucciarelli Is Right: Nobody Knows
**Complex systems:** Nobody understands the whole thing.
**Result:** "Whole" understanding not required. "Right partial" understanding required.
---
## The George Test for Voice AI
**1977 BNR lesson:**
Nobody knows how the whole telephony system works.
**But George knows the right parts.**
**And George's knowledge, encoded in code, made the system work.**
**2026 Voice AI lesson:**
Nobody knows how the whole demo system works.
**But sales engineers know the right parts.**
**And the sales engineers' knowledge, encoded in agent behavior, makes the system work.**
**The test:**
**When edge case appears, can the system access the knowledge it needs?**
**Telephony (1977):** Yes. Programmer asks George, George consults binders, knowledge accessed.
**Voice AI (wrong approach):** No. Chatbot doesn't know, prospect gets generic answer, knowledge not accessed.
**Voice AI (right approach):** Yes. Agent accesses encoded knowledge, prospect gets contextual answer, knowledge accessed.
---
## The Difference Between "Nobody Knows" and "Nobody Encoded"
**"Nobody Knows" = OK** (for complex systems)
**"Nobody Encoded" = FAIL** (for Voice AI demos)
**Telephony:**
- Nobody knows the whole system (✓ OK, inherent to complexity)
- George knows the right parts (✓ OK, accessible when needed)
- George's knowledge encoded in code (✓ OK, system works)
**Voice AI (wrong approach):**
- Nobody knows the whole system (✓ OK, inherent to complexity)
- Sales engineers know the right parts (✓ OK, expertise exists)
- Sales engineers' knowledge NOT encoded (✗ FAIL, system fails)
**Voice AI (right approach):**
- Nobody knows the whole system (✓ OK, inherent to complexity)
- Sales engineers know the right parts (✓ OK, expertise exists)
- Sales engineers' knowledge IS encoded (✓ OK, system works)
**The lesson:**
**"Nobody knows" is fine.**
**"Nobody encoded" is fatal.**
---
## References
- Lorin Hochstein. (2026). [Nobody knows how the whole system works](https://surfingcomplexity.blog/2026/02/08/nobody-knows-how-the-whole-system-works/)
- Hacker News. (2026). [Nobody knows how the whole system works discussion](https://news.ycombinator.com/item?id=46941882)
- Simon Wardley. (2026). LinkedIn post on building without understanding mechanisms
- Adam Jacob. (2026). LinkedIn post on AI changing software development
- Bruce Perens. (2026). LinkedIn post on underestimating existing complexity
- Louis Bucciarelli. (1994). *Designing Engineers*. MIT Press
- Peter Ludemann. (2026). Comment on DMS-100 development experience
---
**About Demogod:** Voice AI demo agents that encode sales engineer expertise, not replace it. Your "George" becomes the agent's intelligence. Byzantine knowledge preserved, complex systems that work. [Learn more →](https://demogod.me)