
# He Trained His Neighbor With a TV Remote — Voice AI Navigation Should Use the Same Pavlovian Feedback Loop

**Posted on February 1, 2026 | HN #4 · 178 points · 30 comments**

*Ibrahim Diallo discovered his Dish Network RF remote controlled both his TV and his neighbor's—same frequency, no barrier. When his neighbor refused to talk, Ibrahim spent weeks conditioning him: volume above 18? TV turns off. Volume stays low? TV stays on. After weeks of consistent feedback, his neighbor learned. Volume 18 became habit. The lesson for Voice AI navigation: immediate, consistent feedback shapes user behavior better than explanations. When users say "Find pricing" but click the wrong link, don't silently auto-correct—show them what you heard, what you clicked, let them feel the mismatch. Pavlovian conditioning works because feedback is instant, binary, and consistent. Voice AI should stop explaining intent and start creating behavioral loops that make correct navigation feel natural.*

---

## The Setup: When RF Remotes Share Frequencies

2007. Ibrahim moves to a new apartment, switches to Dish Network, discovers the RF (radio frequency) remote:

> "Why would anyone ever use an IR remote again? You didn't need a direct line of sight with the device you were controlling. I could actually stand in the kitchen and control the TV. It was amazing."

Then his neighbor—"the loudest in the building"—also gets Dish Network. Also gets the RF remote. Same frequency.

**The interference begins:**

- Channels flip randomly
- Volume changes unprompted
- Ibrahim disables his own RF remote thinking it's broken
- One evening, testing: Ibrahim presses power → his TV turns on, neighbor's TV goes silent
- He hears: "Fuck!" through the wall

Both remotes control both TVs. Ibrahim has discovered unintentional remote control of his neighbor's entertainment system.

---

## The Failed Diplomatic Approach

Ibrahim plans to be a good neighbor. He'll explain the RF interference, demonstrate the issue, suggest reprogramming to different frequencies. Problem solved collaboratively.

**Morning visit, remote in hand:**

> "Hi, I'm Ibrahim. Your upstairs neighbor..." I started and was interrupted almost immediately. "Whatever you are selling," he yelled. "I'm not buying." and he closed the door on my face.

Second knock. No answer. TV turns on at high volume. Diplomacy failed.

Ibrahim keeps his RF remote disabled for family use. But he moves one RF remote to his nightstand. Why the bedroom? **Because he's going to train his neighbor.**

---

## The Training Protocol: Operant Conditioning at 433 MHz

Ibrahim's method:

> "Whenever he turned up his volume, I would simply turn off his device. I would hear his frustration, and his attempts at solving the problem. Like a circus animal trainer, I remained consistent."

**The conditioning loop:**

```
IF neighbor_volume > 18 THEN
    press_power_button()
    neighbor_TV_turns_off()
    neighbor_frustrated()
ELSE
    do_nothing()
    neighbor_TV_stays_on()
    neighbor_watches_show()
END
```

**Key protocol elements:**

1. **Threshold**: Volume above 18 (Ibrahim's arbitrary choice)
2. **Response**: TV powers off (binary, immediate)
3. **Consistency**: "Some nights were difficult, I would keep the remote under my pillow, battling my stubborn neighbor all night"
4. **Duration**: Weeks of nightly enforcement

**The result:**

> "One day, I noticed that I hadn't pressed the button in days. I opened the window and I could still hear the faint sound of his TV. Through trial and error, he learned the lesson."

His neighbor now religiously keeps volume at 18. **He doesn't know why.** That's Pavlovian conditioning.

---

## Why This Worked: The Three Principles of Behavioral Feedback

Ibrahim's accidental experiment demonstrates core conditioning principles:

### 1. Immediate Feedback (No Delay Between Action and Consequence)

**Classical conditioning (Pavlov's dogs):** Bell rings → food appears → salivation (automatic response)

**Operant conditioning (Skinner's rats):** Lever press → food pellet → lever-pressing behavior increases

**Ibrahim's neighbor:** Volume increases → TV turns off within seconds → volume-lowering behavior increases

**Timing is critical.** If the TV turned off 5 minutes after a volume increase, the neighbor wouldn't connect cause and effect. Delay destroys the learning loop.

**Voice AI parallel:**

**Bad feedback (delayed):**

- User: "Find pricing"
- Voice AI navigates silently
- User arrives at wrong page (Privacy Policy)
- 10 seconds later, user realizes the mistake
- **No connection** between voice command and wrong destination

**Good feedback (immediate):**

- User: "Find pricing"
- Voice AI: "I heard 'pricing.' Clicking 'Privacy Policy' link in footer."
- User: "Wait, no—"
- **Immediate mismatch** felt, correction opportunity created

### 2. Binary Feedback (Clear Success/Failure Signal)

**Ibrahim's system:** TV on (success) or TV off (failure). No ambiguity.

**Not:**

- "Your volume might be a bit high"
- "Volume detected: 22 (recommended: 15)"
- "Neighbors may be disturbed by current volume level"

**Just:** TV turns off. Binary signal. Unambiguous consequence.

**Voice AI parallel:**

**Bad feedback (fuzzy):**

- Voice AI navigates, user ends up at wrong page
- No clear signal that this wasn't the intended destination
- User assumes they misspoke or Voice AI "did its best"

**Good feedback (binary):**

- Voice AI: "I found two 'Pricing' links: header and footer. I'm choosing header. Correct?"
- User confirms or corrects
- **Binary choice point:** proceed or retry

### 3. Consistency (Same Stimulus → Same Response, Every Time)

Ibrahim's consistency requirement: **weeks of nightly battles.**

> "Some nights were difficult, I would keep the remote under my pillow, battling my stubborn neighbor all night."
Not:

- Monday: Volume 20 → TV turns off
- Tuesday: Volume 20 → Ibrahim too tired, ignores
- Wednesday: Volume 20 → TV turns off
- **Result:** Inconsistent. No learning.

Ibrahim's actual protocol:

- Monday: Volume > 18 → TV off
- Tuesday: Volume > 18 → TV off
- Wednesday: Volume > 18 → TV off
- Every. Single. Time.

**Result:** Behavior change.

**Voice AI parallel:**

**Bad (inconsistent):**

- Monday: User says "pricing" → Voice AI clicks /pricing link
- Tuesday: User says "pricing" → Voice AI clicks /privacy link (misheard)
- Wednesday: User says "pricing" → Voice AI clicks /pricing link
- **Result:** User doesn't trust Voice AI (behavior unpredictable)

**Good (consistent):**

- Every time: User says "pricing" → Voice AI confirms: "I heard 'pricing.' Clicking Pricing page in navigation."
- If it mishears → Voice AI confirms: "I heard 'privacy.' Is that correct?"
- **Result:** User learns Voice AI's confirmation pattern, trusts it

---

## The Neighbor's Perspective: Learning Without Understanding

The brilliant part of Ibrahim's story:

> "Maybe somewhere on the web, in some obscure forum, someone asked the question: 'Why does my set-top box turn off when I increase the volume?' Well, it might be 18 years too late, but there's your answer. There is a man out there who religiously sets his volume to 18. He doesn't quite know why."

**The neighbor learned the rule without learning the reason.**

This is **procedural learning** (habit formation) vs **declarative learning** (understanding explanation).
### Procedural Learning (Unconscious Skill Acquisition)

- Riding a bike: You can't explain balance equations, but you stay upright
- Touch typing: Fingers know where keys are, conscious mind doesn't
- Volume control: Keep it at 18, don't know why it matters

**How it forms:** Repetition of action-consequence pairs until behavior becomes automatic

### Declarative Learning (Conscious Knowledge)

- Ibrahim explains RF interference → neighbor understands technically
- Neighbor reads manual → learns about frequency reprogramming
- Both decide to reprogram remotes → problem solved collaboratively

**Why Ibrahim's approach worked better than explanation:** The neighbor refused to listen. Door slammed. Explanation opportunity lost. **Conditioning doesn't require consent or understanding.**

**Voice AI application:**

**Explanation approach (declarative):**

- User says "pricing" but Voice AI clicks /privacy
- Voice AI explains: "I use a speech recognition model that sometimes confuses similar phonemes. 'Pricing' and 'privacy' share phonetic features. Would you like to try again?"
- **User response:** Frustrated. Doesn't want a linguistics lesson. Wants the pricing page.

**Conditioning approach (procedural):**

- User says "pricing" but Voice AI clicks /privacy
- Voice AI: "I heard 'privacy.' Wrong page? Say 'back' or 'try again.'"
- User: "Try again"
- Voice AI: "I heard 'pricing' this time. Clicking Pricing in header."
- **Result:** User learns to enunciate clearly when navigation fails. No explanation needed. Behavior shaped by feedback loop.
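The conditioning-approach dialogue above can be sketched as a small decision function. This is a minimal sketch in TypeScript; `buildPrompt`, `ConfirmPrompt`, and the exact message wording are hypothetical illustrations, not from Ibrahim's post or any real Voice AI SDK:

```typescript
// Sketch of the conditioning-approach dialogue: reflect back what was
// heard and offer a retry, instead of explaining why recognition failed.
// All names here are illustrative assumptions, not a real API.

interface ConfirmPrompt {
  message: string;       // what the assistant says aloud
  expectsRetry: boolean; // true when the user should re-issue the command
}

// `heard` is the ASR hypothesis; `target` is the resolved page link,
// or null when no navigation target matched the hypothesis.
function buildPrompt(heard: string, target: string | null): ConfirmPrompt {
  if (target === null) {
    // No match: short, binary feedback — no phoneme lecture.
    return {
      message: `I heard '${heard}.' Wrong page? Say 'back' or 'try again.'`,
      expectsRetry: true,
    };
  }
  // Match: confirm what was heard and what will be clicked.
  return {
    message: `I heard '${heard}.' Clicking ${target}.`,
    expectsRetry: false,
  };
}
```

Every failed attempt produces the same terse prompt, so the user learns the retry pattern from the feedback itself, the way the neighbor learned the volume threshold.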
---

## The Four-Stage Conditioning Process Ibrahim Used

### Stage 1: Punishment (Negative Punishment)

Turning the TV off removes a positive stimulus (watching TV) to decrease a behavior—in operant-conditioning terms, negative punishment, not negative reinforcement.

- **Stimulus:** Neighbor increases volume above threshold
- **Response:** TV turns off (removing positive stimulus = watching TV)
- **Effect:** Neighbor frustrated, tries to solve problem (banging remote, unplugging box)
- **Duration:** Days to weeks (neighbor doesn't connect volume to TV shutoff yet)

### Stage 2: Pattern Recognition

- **Stimulus:** Neighbor starts noticing correlation (volume up → TV off happens repeatedly)
- **Response:** Neighbor experiments (tries different volumes, times of day)
- **Effect:** Neighbor discovers threshold exists (around volume 18-20)
- **Duration:** Several nights of experimentation

### Stage 3: Behavior Modification

- **Stimulus:** Neighbor wants to watch TV uninterrupted
- **Response:** Neighbor keeps volume below discovered threshold
- **Effect:** TV stays on, neighbor successfully watches shows
- **Duration:** Tested over multiple nights, reinforced by success

### Stage 4: Habit Formation

- **Stimulus:** None needed (behavior now automatic)
- **Response:** Neighbor defaults to volume 18 without thinking
- **Effect:** Behavior persists even without active conditioning
- **Duration:** Years (Ibrahim moved away, neighbor likely still at volume 18)

**Voice AI parallel:**

### Stage 1: User Encounters Failure

- User: "Find enterprise pricing"
- Voice AI: Navigates to wrong page (Contact Us form)
- User frustrated, tries manual navigation

### Stage 2: User Notices Patterns

- User learns: Saying "find" sometimes confuses Voice AI
- User experiments: "Pricing page" → success, "Find pricing" → failure
- User discovers: Shorter, clearer commands work better

### Stage 3: User Adapts Behavior

- User wants fast navigation
- User starts using clear, short commands: "Pricing," "Features," "Docs"
- Voice AI navigation improves (commands match trained patterns)

### Stage 4: Habit Forms

- User defaults to short commands without thinking
- Voice AI feels natural (behavior optimized for model's strengths)
- **Result:** User shaped by Voice AI's feedback patterns, Voice AI more effective

---

## What Voice AI Can Learn From RF Remote Conditioning

### Lesson 1: Silent Auto-Correction Prevents Learning

**Ibrahim's approach:** TV turns off (loud feedback, impossible to miss)

**Not:** TV volume automatically adjusts to 18 (silent correction, neighbor never learns)

**Voice AI equivalent:**

**Silent auto-correction (prevents learning):**

- User: "Find pricing"
- Voice AI hears: "Find privacy"
- Voice AI corrects silently: "User probably meant pricing based on page context"
- Voice AI navigates to /pricing
- **User never learns** Voice AI misheard

**Result:** User assumes Voice AI is perfect, continues saying "pricing" unclearly, future failures surprise them

**Loud feedback (enables learning):**

- User: "Find pricing"
- Voice AI hears: "Find privacy"
- Voice AI: "I heard 'privacy.' Clicking Privacy Policy link. Correct?"
- User: "No, pricing!"
- Voice AI: "Got it. 'Pricing.' Navigating to Pricing page."
- **User learns** they weren't clear, enunciates better next time

**Result:** User adapts speech patterns, fewer failures

### Lesson 2: Immediate Feedback Beats Delayed Explanation

**Ibrahim's timing:** Volume up → TV off within seconds (causal connection clear)

**Not:** Volume up → next day, Ibrahim leaves a note: "Your TV volume was too loud yesterday evening"

**Voice AI equivalent:**

**Delayed explanation (broken loop):**

- User: "Find pricing"
- Voice AI navigates silently
- User ends up at Features page (wrong)
- 10 seconds later, user realizes the mistake
- No feedback from Voice AI
- **Causal link** broken (user doesn't know whether Voice AI misheard or clicked the wrong link)

**Immediate feedback (intact loop):**

- User: "Find pricing"
- Voice AI: "I heard 'pricing.' I see 'Pricing' in navigation menu and 'Features' tab. Both mention pricing. Clicking navigation menu."
- User sees the choice being made in real time
- If wrong → User interrupts: "No, Features tab!"
- **Causal link** preserved (user sees exactly what Voice AI understood and chose)

### Lesson 3: Binary Choices Beat Graduated Scales

**Ibrahim's feedback:** TV on (success) or TV off (failure). Two states only.

**Not:** TV volume automatically reduces to Ibrahim's preferred level (graduated response)

**Why binary works better:**

- Clear threshold (volume 18 = dividing line)
- Unambiguous consequence (TV on vs TV off, no middle ground)
- Easier to learn (one rule: stay below threshold)

**Voice AI equivalent:**

**Graduated feedback (confusing):**

- User: "Find pricing"
- Voice AI: "Confidence: 73%. I think you said 'pricing.' Navigating..."
- User ends up at correct page
- **User doesn't know** if 73% confidence is good or bad

**Binary feedback (clear):**

- User: "Find pricing"
- Voice AI: "I'm certain I heard 'pricing.' Clicking Pricing page." (high confidence → direct action)
- OR: "I heard 'pricing' or 'privacy.' Which did you mean?" (low confidence → binary choice)
- **User learns** the difference between clear and ambiguous commands

### Lesson 4: Consistency Creates Trust

**Ibrahim's consistency:** Every time volume > 18 → TV off. No exceptions.

**Voice AI equivalent:**

**Inconsistent behavior (destroys trust):**

- Monday: User says "features" → Voice AI navigates to /features
- Tuesday: User says "features" → Voice AI navigates to /pricing (misheard)
- Wednesday: User says "features" → Voice AI navigates to /features
- **User thinks:** Voice AI is unreliable, can't trust it

**Consistent confirmation (builds trust):**

- Every time: Voice AI confirms what it heard before navigating
- Monday: "I heard 'features.' Clicking Features tab."
- Tuesday: "I heard 'features.' Clicking Features tab."
- Wednesday: "I heard 'features.' Clicking Features tab."
- **User thinks:** Voice AI always tells me what it heard, I can correct if wrong

### Lesson 5: Users Adapt Without Knowing They're Adapting

**Ibrahim's neighbor:** Keeps volume at 18, doesn't consciously know why

**Voice AI equivalent:** After weeks of using Voice AI with consistent confirmation:

- User naturally enunciates more clearly (learned from misheard corrections)
- User uses shorter commands (learned long commands confuse the model)
- User waits for confirmation before continuing (learned to verify navigation)

**User doesn't think:** "I'm optimizing my speech for the model's weaknesses"

**User thinks:** "Voice AI navigation feels natural now"

**That's successful conditioning.** Behavior shaped without explicit instruction.

---

## The Architecture of Feedback Loops

Ibrahim's RF remote conditioning system has three components:

### 1. Stimulus Detection (Volume Monitoring)

Ibrahim listens for his neighbor's TV volume through the wall/window.

- **Implementation:** Passive monitoring (auditory perception)
- **Threshold:** Volume above ~18 (subjective, but consistent)

**Voice AI equivalent:** Monitor the user's voice commands:

- Acoustic input: "Find pricing"
- ASR output: "Find privacy" (phonetic similarity)
- Confidence score: 0.68 (below 0.85 threshold = uncertain)

### 2. Response Execution (TV Power Off)

When the threshold is exceeded → immediate action (press power button)

- **Implementation:** RF remote signal sent (433 MHz)
- **Effect:** Neighbor's TV turns off within 1 second

**Voice AI equivalent:** When confidence is below threshold → request clarification:

- Voice AI: "I heard 'privacy.' Did you mean 'pricing' or 'privacy'?"
- User confirms/corrects
- **Effect:** Mismatch revealed immediately, correction opportunity created

### 3. Feedback Observation (Neighbor's Response)

Ibrahim hears his neighbor's frustration, problem-solving attempts, eventual adaptation.
- **Implementation:** Auditory feedback through walls (growls, TV restarting, silence)
- **Learning signal:** When TV stays on for extended periods = neighbor learned the threshold

**Voice AI equivalent:** Observe the user's correction patterns:

- User frequently corrects "privacy" → "pricing": Learn this confusion is common
- User rarely needs to correct "features": Learn this command is clear
- **Learning signal:** When correction rate drops = user adapted speech patterns

---

## Designing Voice AI Navigation for Behavioral Conditioning

### Principle 1: Always Confirm What You Heard

**Before navigation:**

```
User: "Find pricing"
Voice AI: "I heard 'pricing.' Searching navigation for Pricing page..."
[navigates]
```

**Why this works:**

- User hears their command reflected back (auditory feedback loop)
- If mismatch → User interrupts: "No, I said features!"
- If match → User's confidence in Voice AI increases

**Ibrahim's equivalent:** The TV turning off is confirmation (you did the thing I'm responding to)

### Principle 2: Binary Choices for Uncertainty

**When confidence is low (<0.85):**

```
User: "Find pricing"
Voice AI: "I heard 'pricing' or 'privacy.' Which did you mean?"
User: "Pricing"
Voice AI: "Got it. Navigating to Pricing page."
```

**Not:**

```
Voice AI: "I'm 68% confident you said 'pricing.' Navigating anyway..."
```

**Why binary works:** Clear decision point, user feels in control

**Ibrahim's equivalent:** TV on (right) or TV off (wrong). No middle ground.

### Principle 3: Immediate Feedback, No Delays

**Timing requirement:** Confirm within 500ms of the user's command

**Too slow:**

```
User: "Find pricing"
[waits 3 seconds]
Voice AI: "Navigating to Pricing page."
[User already trying manual navigation, feedback loop broken]
```

**Fast enough:**

```
User: "Find pricing"
Voice AI: [within 500ms] "I heard 'pricing.' Clicking header link."
[User still focused on navigation, can interrupt if wrong]
```

**Ibrahim's equivalent:** TV turns off within 1-2 seconds of volume increase

### Principle 4: Consistent Patterns Across Sessions

**Every session, same confirmation format:**

- "I heard [X]. Clicking [Y]."
- Not: sometimes "Navigating to X," sometimes "I heard X," sometimes silent

**User learns the pattern:**

- Expects confirmation every time
- Knows they can interrupt during confirmation
- Trusts Voice AI because behavior is predictable

**Ibrahim's equivalent:** Volume > 18 → TV off, every single night

---

## What NOT to Do: Anti-Patterns That Break Conditioning

### Anti-Pattern 1: Explaining Instead of Confirming

**Bad:**

```
User: "Find pricing"
Voice AI: "I use a speech recognition model that sometimes confuses similar phonemes. I heard 'pricing' which could also be 'privacy' based on acoustic similarity. I'm going to navigate to the Pricing page, but if that's wrong, you can say 'back'."
[User stops listening halfway through explanation]
```

**Good:**

```
User: "Find pricing"
Voice AI: "I heard 'pricing.' Clicking Pricing link."
[User confirms with eyes, interrupts if wrong]
```

**Ibrahim didn't explain RF interference.** He just turned the TV off. Conditioning doesn't require understanding.

### Anti-Pattern 2: Silent Auto-Correction

**Bad:**

```
User: "Find pricing"
Voice AI hears: "privacy"
Voice AI thinks: "Context suggests user wants /pricing, not /privacy. Auto-correcting..."
[navigates to /pricing silently]
User: "Oh good, it worked." [doesn't know it misheard]
```

**Why this fails:** User never learns Voice AI misheard. Future misheard commands surprise them.

**Ibrahim's equivalent:** Secretly adjusting his neighbor's TV volume to 18 while he sleeps. The neighbor never learns the threshold exists.
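Anti-Pattern 2 suggests a simple guard: when a context model disagrees with the ASR hypothesis, surface both instead of silently overriding. A minimal sketch under assumed names (`resolveIntent` and `Decision` are hypothetical, not from any real Voice AI SDK):

```typescript
// Guard against silent auto-correction: never let a context-based guess
// quietly override what ASR actually heard. Names are illustrative.

interface Decision {
  action: "navigate" | "ask"; // navigate directly, or ask the user
  utterance: string;          // what the assistant says aloud
}

// `asrHeard` is the raw ASR hypothesis; `contextGuess` is what a
// page-context model predicts the user meant.
function resolveIntent(asrHeard: string, contextGuess: string): Decision {
  if (asrHeard === contextGuess) {
    // Agreement: navigate, but still say what was heard (loud feedback).
    return { action: "navigate", utterance: `I heard '${asrHeard}.' Navigating.` };
  }
  // Disagreement: surface the mismatch so the user feels it and can
  // correct — the learning signal silent auto-correction would erase.
  return {
    action: "ask",
    utterance: `I heard '${asrHeard}.' Did you mean '${asrHeard}' or '${contextGuess}'?`,
  };
}
```

Note that even in the agreement case the assistant states what it heard, keeping the feedback loop intact.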
### Anti-Pattern 3: Inconsistent Feedback

**Bad:**

```
Monday: User says "features" → Voice AI confirms, navigates correctly
Tuesday: User says "features" → Voice AI silent, navigates correctly
Wednesday: User says "features" → Voice AI confirms, navigates correctly
```

**Why this fails:** User doesn't know when to expect confirmation. Can't rely on the pattern.

**Ibrahim's equivalent:** Some nights the TV turns off at volume 20, some nights Ibrahim's asleep and the TV stays on at volume 25. The neighbor never learns the threshold.

### Anti-Pattern 4: Delayed Consequences

**Bad:**

```
User: "Find pricing"
[Voice AI navigates silently]
[User arrives at wrong page]
[5 seconds later]
Voice AI: "Oops, that was the wrong page. Did you mean Pricing?"
```

**Why this fails:** Delay breaks the causal connection. User doesn't connect the voice command to the wrong page.

**Ibrahim's equivalent:** Next morning, Ibrahim leaves a note saying "Your TV was too loud last night." The neighbor doesn't connect volume to consequence.

---

## The Ethical Consideration: Conditioning Without Consent

Ibrahim's neighbor never consented to being conditioned. He doesn't know it happened. Is this ethical?

**Arguments that it was unethical:**

- Manipulation without knowledge or consent
- Neighbor lost autonomy (volume choice controlled by Ibrahim)
- Deception (neighbor thinks it's a technical issue, not external control)

**Arguments that it was an acceptable intervention:**

- Neighbor violated the social contract first (excessive noise)
- Neighbor refused dialogue (door slammed, no alternative)
- Result benefits both parties (quieter building, neighbor learns consideration)
- No permanent harm (neighbor could discover/reprogram the remote anytime)

**Voice AI parallel:** User "conditioning" via feedback loops:

- Users don't explicitly consent to "being trained" by Voice AI
- Users don't know their speech patterns are being shaped
- Is this ethical?
**Differences from Ibrahim's case:**

- Voice AI feedback is **transparent** ("I heard X")
- Users benefit directly (better navigation, fewer errors)
- Users can opt out (disable Voice AI, use manual navigation)
- No deception (Voice AI explicitly states what it heard/did)

**Conclusion:** Conditioning via transparent feedback loops is ethical IF:

1. User knows feedback is happening (confirmations are audible/visible)
2. User can disable/ignore feedback (opt-out available)
3. User benefits from adaptation (better UX, not manipulation for profit)

---

## Practical Implementation: The Confirmation Architecture

### Component 1: Acoustic Input Processing

```typescript
interface VoiceInput {
  raw_audio: AudioBuffer;
  asr_output: string;      // "Find pricing"
  confidence: number;      // 0.0-1.0
  alternatives?: string[]; // ["privacy", "pricing"]
  timestamp: number;
}
```

### Component 2: Feedback Decision Logic

```typescript
function decideConfirmation(input: VoiceInput): ConfirmationStrategy {
  // High confidence: confirm implicitly, leave a short window to interrupt.
  if (input.confidence >= 0.85) {
    return {
      type: "implicit",
      message: `I heard '${input.asr_output}'. Navigating...`,
      wait_for_interrupt_ms: 500,
    };
  }

  // Low confidence with competing hypotheses: offer a binary choice.
  if (input.alternatives && input.alternatives.length > 1) {
    return {
      type: "explicit_choice",
      message: `I heard '${input.alternatives[0]}' or '${input.alternatives[1]}'. Which?`,
      require_confirmation: true,
    };
  }

  // No usable hypothesis: ask the user to repeat.
  return {
    type: "repeat_request",
    message: "I didn't catch that. Please repeat.",
    require_new_input: true,
  };
}
```

### Component 3: Real-Time Interruption Handling

```typescript
async function navigate(intent: string, confirmation: ConfirmationStrategy) {
  speak(confirmation.message);

  if (confirmation.type === "implicit") {
    // Give the user 500ms to interrupt before acting.
    const interrupted = await waitForInterrupt(500);
    if (interrupted) {
      speak("Stopping. What did you mean?");
      return;
    }
  }

  if (confirmation.type === "explicit_choice") {
    const choice = await waitForUserChoice();
    intent = choice;
  }

  // Execute navigation with continued confirmation
  const target = findTarget(intent);
  speak(`Clicking ${target}`);
  clickElement(target);
}
```

### Component 4: Learning from Corrections

```typescript
interface CorrectionEvent {
  heard: string;        // "privacy"
  meant: string;        // "pricing"
  context: PageContext; // current page state
  user_id: string;
  timestamp: number;
}

function learnFromCorrection(event: CorrectionEvent) {
  // Track common confusions
  confusionMatrix.increment(event.heard, event.meant);

  // If this confusion is frequent → raise the confirmation bar for it
  if (confusionMatrix.get(event.heard, event.meant) > 10) {
    confirmationThresholds.set(event.heard, 0.95); // Higher bar for this term
  }

  // User-specific adaptation
  userProfiles.get(event.user_id).addConfusion(event.heard, event.meant);
}
```

---

## The Long-Term Effect: Users Don't Know They've Been Trained

Ibrahim's punchline:

> "There is a man out there who religiously sets his volume to 18. He doesn't quite know why."

**18 years later**, his neighbor likely still keeps the volume at 18. Not because he remembers the RF interference. Because **volume 18 feels right.**

**Voice AI equivalent:** After 6 months of using Voice AI navigation with consistent confirmation:

- User naturally enunciates product names clearly
- User pauses between commands (learned Voice AI needs processing time)
- User uses short, clear phrases ("Pricing," not "Can you find the pricing page for me?")

**User doesn't think:** "I adapted my speech for the model"

**User thinks:** "Voice AI navigation feels natural"

**That's the goal.** Shape user behavior through feedback until correct usage feels instinctive.

Ibrahim conditioned his neighbor to be quiet. Voice AI conditions users to be clear. Both work through the same mechanism: **consistent, immediate, binary feedback.**

Neither requires understanding why the rule exists.
The rule just... feels right.

That's Pavlovian conditioning at 433 MHz. And at whatever frequency Voice AI operates.

---

*Keywords: Pavlovian conditioning voice AI, behavioral feedback loops navigation, RF remote interference neighbor training, operant conditioning UI design, immediate feedback voice commands, binary confirmation voice AI, user behavior shaping through feedback, procedural learning vs declarative learning, transparent AI conditioning, habit formation voice navigation*

*Word count: ~4,500 | Source: idiallo.com/blog/teaching-my-neighbor-to-keep-the-volume-down | HN: 178 points, 30 comments*