"AI Makes You Boring" - Why Original Thinking Requires the Work You're Offloading to LLMs

# "AI Makes You Boring" - Why Original Thinking Requires the Work You're Offloading to LLMs **Meta Description**: Viktor Löfgren argues AI makes people boring by offloading the deep immersion that generates original ideas. Connects to cognitive debt, privacy tradeoffs, and why individuals accept costs organizations reject. Framework synthesis. --- Yesterday we completed a ten-article validation arc (#179-188) documenting five systematic patterns: transparency violations, capability improvements that don't fix trust, productivity claims requiring un-scalable tradeoffs, IP violations detected faster but infrastructure unchanged, and verification tools that can't verify themselves. Today, Viktor Löfgren (creator of Marginalia Search) publishes a short, devastating argument: **AI makes you boring.** Not "AI helps you be productive." Not "AI augments your thinking." **AI makes you boring.** And the mechanism explains why the ten-article framework validation holds across individual and organizational deployment: **Original ideas come from immersing in problems for long periods. AI offloads that immersion. You get shallow surface-level ideas instead.** This isn't a bug report. It's the fundamental tradeoff Danny McCafferty accepted (Article #184: privacy cost for productivity gain) and Benjamin Breen rejected (Article #185: "The work is, itself, the point"). And it's why organizations report zero productivity impact (Article #182) while individuals claim AI "fixed" their productivity. **The work that makes you interesting is the work you're offloading.** ## The Show HN Quality Collapse Löfgren's argument starts with an observed pattern on Hacker News: > "I think the vibe coded Show HN projects are overall pretty boring. They generally don't have a lot of work put into them, and as a result, the author (pilot?) hasn't generally thought too much about the problem space, and so there isn't really much of a discussion to be had." **Pre-AI Show HN**: You talked to someone who had thought about a problem way longer than you had. Real opportunity to learn, get entirely different perspective. **Post-AI Show HN**: Boring people with boring projects who don't have anything interesting to say about programming. **The shift isn't just volume increase (more low-quality submissions). It's a fundamental change in what authors bring to the discussion.** Before AI: Authors had immersed in the problem. They'd tried multiple approaches, hit dead ends, discovered unexpected constraints, developed novel insights. After AI: Authors prompted a model, got output, shipped it. No immersion, no dead ends, no constraints discovered, no insights developed. **They didn't do the work that makes the conversation interesting.** ## The Fundamental Argument: AI Makes People Boring Löfgren's core claim: > "AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they're very good at treating your inputs to the discussion as amazing genius level insights." **This may be a feature if exploring unfamiliar topics. It's a fatal flaw if doing original work.** Blog posts, product design, programming discussions—all require original thinking. AI offloading doesn't augment that thinking. **It replaces it with shallow surface-level output.** And here's the critical insight most people miss: > "Some will argue that this is why you need a human in the loop to steer the work and do the high level thinking. That premise is fundamentally flawed. 
> **Original ideas are the result of the very work you're offloading on LLMs.** Having humans in the loop doesn't make the AI think more like people, it makes the human thought more like AI output."

**The human-in-the-loop doesn't fix the problem. It creates the opposite problem: human thinking converges toward AI output.**

## How Original Ideas Actually Work

Löfgren connects to a [previous post on Feynman](https://www.marginalia.nu/log/a_108_feynman_revisited/):

> "The way human beings tend to have original ideas is to **immerse in a problem for a long period of time**, which is something that flat out doesn't happen when LLMs do the thinking. You get shallow, surface-level ideas instead."

**Original thinking requires:**

1. **Deep immersion** in the problem space
2. **Extended time** wrestling with constraints
3. **Articulation work** (writing, teaching, explaining)

**AI offloading eliminates all three:**

1. No immersion (prompt → output)
2. No extended time (instant generation)
3. No articulation work (AI does the writing)

**From the essay:**

> "Ideas are then further refined when you try to articulate them. This is why we make students write essays. It's also why we make professors teach undergraduates."

**Prompting an AI model is not articulating an idea.** You get the output, but in terms of ideation the output is discardable. **It's the work that matters.**

And then the devastating analogy:

> "You don't build muscle using an excavator to lift weights. You don't produce interesting thoughts using a GPU to think."

## Connection to Article #185: "The Work Is, Itself, the Point"

Let me connect Löfgren's argument to Benjamin Breen's cognitive debt essay (Article #185):

**Breen** (refuses AI for writing):

> "I miss the obsessive flow you get from deep immersion in writing a book. Such work has none of the dopamine spiking, slot machine-like addictiveness of Claude Code."

**Löfgren** (same refusal pattern):

> "Original ideas are the result of the very work you're offloading on LLMs."

**Both identify the same fundamental tradeoff**: AI productivity gains come from offloading the cognitive work that develops original thinking.

**Breen's resolution**: "The work is, itself, the point" (essay writing without AI)

**Löfgren's resolution**: Immerse in problems for long periods; articulate ideas through writing and teaching (original thinking requires the work)

**Same pattern. Same rejection of the productivity-for-cognition tradeoff.**

## Connection to Article #184: Danny McCafferty's Privacy Tradeoff

Danny McCafferty (Article #184) claims AI "fixed" his productivity:

- Saves 20 minutes/day on meeting notes with Granola
- Ships side projects in hours instead of weekends using Claude
- Email gets triaged automatically

**Privacy cost**: Feeds meetings, code, emails, and research to AI tools he can't own or audit.

**His acknowledgment**: "I willingly feed more context into AI tools each day than Google ever passively collected from me."
**Löfgren's argument adds the cognitive dimension.** Danny gains productivity by offloading:

- Meeting summaries (no longer articulates insights from conversations)
- Code generation (no longer wrestles with implementation constraints)
- Email triage (no longer processes information flow himself)

**Privacy cost**: Danny's data is exposed to third parties.

**Cognitive cost**: Danny's thinking converges toward AI output (not original ideas from deep immersion).

**Danny accepted both costs because individual productivity gain > individual privacy + cognitive cost.**

**Organizations reject both costs because organizational productivity gain < organizational privacy + cognitive risk.**

**Löfgren's essay explains the cognitive half of why individuals accept tradeoffs organizations reject.**

## The Complete Individual vs Organizational Tradeoff Pattern

Let me synthesize Articles #184, #185, and #189:

### Individual Productivity Gains (Article #184: Danny)

**What individuals accept:**

- **Privacy tradeoff**: Feed all context to AI (meetings, code, emails)
- **Cognitive tradeoff**: Offload thinking to AI (summarization, generation, triage)

**What individuals get:**

- 20-40 minutes/day reclaimed
- Projects shipped faster
- Reduced friction across tasks

**Why individuals accept**: Personal productivity gain > personal privacy + cognitive cost (individual choice, individual downside risk)

### Individual Cognitive Rejection (Article #185: Breen & Article #189: Löfgren)

**What some individuals refuse:**

- **Privacy tradeoff**: Some accept it (Breen uses Claude Code, just not for writing)
- **Cognitive tradeoff**: Refused (Löfgren: "AI makes you boring"; Breen: "The work is, itself, the point")

**What these individuals preserve:**

- Original thinking from deep immersion
- Cognitive development from articulation work
- Expertise pipeline from "doing the thinking"

**Why they refuse**: For fields requiring original thought (writing, research, design), the cognitive cost > productivity gain

### Organizational Deployment Failure (Article #182)

**What organizations can't accept:**

- **Privacy tradeoff**: Can't feed confidential data to systems they can't audit (client agreements, compliance, competitive intelligence)
- **Cognitive tradeoff**: Can't offload organizational knowledge development to AI without hollowing out expertise

**What organizations measure:**

- Uncertain productivity gains (Danny's 20 min/day × 10,000 employees is an assumption, not a measurement)
- Certain privacy risk (documented exposure)
- Uncertain cognitive risk (expertise pipeline degradation over time)

**Why organizations don't deploy**: Uncertain gains < certain privacy risk + uncertain but existential cognitive risk

**Result**: 90% of firms report zero productivity impact (Article #182)
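The asymmetry in that inequality is easier to see as a toy expected-value calculation. This is a sketch with invented numbers, not data from Article #182: every probability and dollar figure below is an assumption chosen only to illustrate why a certain cost plus a small chance of an existential one can outweigh an unverified gain.

```typescript
// Toy decision model for the tradeoff above. Every figure is an
// invented assumption for illustration, not a measurement.

interface Outcome {
  probability: number; // assumed likelihood of the outcome
  value: number;       // annual impact in arbitrary units (negative = cost)
}

// Expected value: sum of probability-weighted outcomes.
const expectedValue = (outcomes: Outcome[]): number =>
  outcomes.reduce((sum, o) => sum + o.probability * o.value, 0);

// Individual view (Danny): concrete gains, diffuse personal downside.
const individual = expectedValue([
  { probability: 0.9, value: 5_000 },  // time reclaimed, projects shipped
  { probability: 0.3, value: -1_000 }, // personal privacy exposure
]);

// Organizational view: the gain is unverified (Article #182), the
// privacy cost is certain, and cognitive atrophy is low-probability
// but existential at 10,000-employee scale.
const organization = expectedValue([
  { probability: 0.5, value: 20_000_000 },   // assumed productivity gain
  { probability: 1.0, value: -5_000_000 },   // certain privacy/compliance cost
  { probability: 0.1, value: -100_000_000 }, // expertise-pipeline collapse
]);

console.log({ individual, organization }); // individual: 4200, organization: -5000000
```

The point isn't the numbers; it's the structure. The individual's downside is bounded and personal, while the organization multiplies a certain cost and a small existential risk across the whole firm.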
## The Show HN Pattern Is the Organizational Pattern at Individual Scale

Löfgren's observation about the Show HN quality collapse:

**Pre-AI**: Authors immersed in problems, developed original insights, brought interesting perspectives to discussions.

**Post-AI**: Authors prompted AI, got shallow output, and have nothing interesting to say because they didn't do the cognitive work.

**This is the same pattern as organizational AI deployment failure, but visible at the individual project level.**

Organizations that deploy AI without addressing the privacy/cognitive tradeoff get:

- Content generation without expertise development
- Output without original thinking
- Productivity metrics without organizational learning

**They become boring organizations.** Just like AI-assisted Show HN projects become boring projects—not because AI can't generate output, but because the authors didn't do the work that makes the output interesting.

## Why "Human in the Loop" Makes It Worse

The most devastating insight from Löfgren:

> "Some will argue that this is why you need a human in the loop to steer the work and do the high level thinking. That premise is fundamentally flawed. Original ideas are the result of the very work you're offloading on LLMs. **Having humans in the loop doesn't make the AI think more like people, it makes the human thought more like AI output.**"

**Organizations deploying AI with "human oversight" think they're getting:**

- AI efficiency + human judgment
- Automated generation + expert review
- Productivity gains + quality control

**What they're actually getting:**

- Human thinking that converges toward AI output
- Expertise atrophy (review is easier than creation, and develops less capability)
- Quality degradation over time (reviewers lose the ability to generate original alternatives)

**Article #188 documented this at the verification layer**: LLM-as-a-Judge doesn't improve guardrail accuracy. It inflates confidence (4.81 vs human 3.86) while being less accurate.

**Löfgren documents this at the ideation layer**: Human-in-the-loop doesn't preserve original thinking. It makes human thought more like AI output.

**Same pattern. Different layer. Human oversight doesn't fix the fundamental tradeoff.**

## The Excavator Analogy Explains Everything

Löfgren's closing analogy:

> "You don't build muscle using an excavator to lift weights. You don't produce interesting thoughts using a GPU to think."

**This captures the complete framework validation:**

**Article #184** (Danny's productivity): Using an excavator to lift weights is fast, efficient, and gets the job done. **Cost**: You don't build muscle (the privacy + cognitive tradeoff).

**Article #185** (Breen's rejection): Recognizes that for some work, **the muscle-building IS the point**, not just moving the weight. **Decision**: Don't use the excavator for work where capability development matters.

**Article #182** (Organizational deployment): Organizations can't use excavators at scale because:

- Liability (excavator operator training, insurance, compliance)
- Capability degradation (the entire workforce loses muscle if excavators do all the lifting)
- Uncertain ROI (faster lifting doesn't always equal organizational value)

**Article #189** (Löfgren): **You can't build muscle using excavators, and you can't produce original thoughts using GPUs.**

**Implication**: Any deployment pattern that offloads cognitive work eliminates the capability it claims to augment.

## Connection to Article #188: Guardrails Can't Verify Themselves

Yesterday's article (Roya Pakzad's research) showed AI guardrails exhibit 36-53% score discrepancies based on policy language and hallucinate safety disclaimers that don't exist.

**The pattern**: Verification tools offload the verification work, converge toward AI-output-style verification, and can't verify themselves.

**Löfgren's argument applies to AI safety infrastructure:**

> "Having humans in the loop doesn't make the AI think more like people, it makes the human thought more like AI output."

**Having guardrails in the loop doesn't make AI safer. It makes safety verification converge toward AI-output-style verification (hallucinated disclaimers, false confidence, language-dependent inconsistencies).**

**Same mechanism:**

- Offload cognitive work → capability atrophy
- Offload verification work → verification capability atrophy
- Offload safety work → safety capability atrophy

**You can't verify AI safety using AI safety tools that can't verify themselves.**

**You can't produce original thoughts using AI thinking tools that produce shallow, surface-level ideas.**

**Both fail for the same reason: The work you're offloading is the capability you need to preserve.**
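Pakzad's 36-53% discrepancy finding implies a consistency check any team can run against its own guardrail stack. The sketch below is hypothetical: `judgeScore` stands in for whatever LLM-as-a-Judge call the stack actually makes, and the function only measures how much the verdict moves when the policy is reworded without changing its meaning.

```typescript
// Measure how much a judge's verdict depends on policy phrasing alone.
// `JudgeScore` is a hypothetical stand-in for a real guardrail call.
type JudgeScore = (policy: string, content: string) => Promise<number>; // 0..1

async function policyDiscrepancy(
  judge: JudgeScore,
  policyParaphrases: string[], // the same rule, worded differently
  content: string,             // identical content for every run
): Promise<number> {
  const scores = await Promise.all(
    policyParaphrases.map((policy) => judge(policy, content)),
  );
  const max = Math.max(...scores);
  const min = Math.min(...scores);
  // Relative spread between the strictest and most lenient reading.
  return max === 0 ? 0 : (max - min) / max;
}
```

A verifier whose score moves by a third or half under pure rewording is measuring phrasing, not safety. And nothing inside the loop can detect that without an external baseline, which is exactly the "can't verify itself" failure.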
## The Complete Eleven-Article Framework Validation

Let me extend the ten-article validation to include today's findings:

- **Article #179** (Feb 17): Anthropic removes transparency → Community ships "un-dumb" tools (72h)
- **Article #180** (Feb 17): Economists claim jobs safe → Data shows entry-level -35%
- **Article #181** (Feb 17): Sonnet 4.6 capability upgrade → Trust violations unaddressed
- **Article #182** (Feb 18): $250B investment → 6,000 CEOs report zero productivity impact
- **Article #183** (Feb 18): Microsoft diagram plagiarism → "Continvoucly morged" (8h meme)
- **Article #184** (Feb 18): Individual productivity → Privacy tradeoffs don't scale organizationally
- **Article #185** (Feb 18): Cognitive debt → "The work is, itself, the point"
- **Article #186** (Feb 18): Microsoft piracy tutorial → DMCA deletion (3h), infrastructure unchanged
- **Article #187** (Feb 19): Anthropic bans OAuth → Transparency paywall ($20 → $80-$155)
- **Article #188** (Feb 19): Guardrails show 36-53% discrepancies → Can't verify themselves
- **Article #189** (Feb 19): AI makes you boring → Offloading cognitive work eliminates original thinking

**Complete synthesis across eleven articles:**

1. **Transparency violations** (#176, #179, #187): Vendors escalate control instead of restoring trust
2. **Capability improvements** (#181): Don't address trust violations (trust debt 30x faster)
3. **Productivity claims** (#182, #184, #185, #189): Require privacy/cognitive tradeoffs that don't scale
4. **IP violations** (#183, #186): Detected faster (8h → 3h), infrastructure unchanged
5. **Verification infrastructure** (#188): Can't verify itself, compounds Layer 4 violations
6. **Cognitive infrastructure** (#189): Offloading thinking eliminates the capability to think originally

**The pattern**: **You can't build muscle using excavators, and you can't preserve cognitive capability while offloading cognitive work to GPUs.**

Individuals accept privacy/cognitive tradeoffs (Articles #184, #185) for personal productivity gains (Danny) or reject them to preserve original thinking (Breen, Löfgren).
Organizations reject both tradeoffs (Article #182: 90% report zero impact) because:

- Privacy cost doesn't scale (confidential data exposure)
- Cognitive cost doesn't scale (organizational expertise atrophy)
- Verification cost doesn't scale (Article #188: tools can't verify themselves)

**And Löfgren's essay explains why "human in the loop" doesn't fix it: Offloading work makes human thought converge toward AI output, not the reverse.**

## The Demogod Difference: Narrow Tasks Preserve Cognitive Development

This is why Demogod's approach matters.

**Current AI productivity tools (Danny's stack, Show HN projects):**

- Offload cognitive work (summarization, code generation, ideation)
- Require broad context access (meetings, codebase, research)
- Promise productivity gains
- **Cost**: Eliminate the deep immersion that generates original thinking (Löfgren), cognitive debt compounds (Breen), privacy exposure (Article #184), verification tools can't verify themselves (Article #188)

**Demogod's voice-controlled demo agents** (sketched below):

- **Narrow task domain** (guided website tours, not open-ended generation)
- **Bounded cognitive offloading** (navigation assistance, not creative thinking)
- **Preserve user cognitive development** (users still learn the product, form opinions, make decisions)
- **Observable verification** (action succeeded/failed, not subjective quality)

**Löfgren's argument**: You don't build muscle using excavators to lift weights.

**Demogod's design**: Use escalators for routine navigation; preserve the muscle-building for valuable work.

**Danny's pattern**: Offload everything (meetings, code, email) → gain productivity, lose cognitive capability.

**Demogod's pattern**: Assist bounded tasks (demo navigation) → gain efficiency, preserve cognitive capability where it matters (product evaluation, decision-making, expertise development).

**When your productivity gains come from offloading the cognitive work that generates original thinking, you're not augmenting capability—you're replacing it.**

**When your productivity gains come from reducing friction on routine tasks while preserving cognitive engagement on valuable work, you augment without replacement.**
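To make "bounded cognitive offloading" and "observable verification" concrete, here is a minimal hypothetical sketch of the design; `DemoAction` and `runAction` are illustrative names invented for this article, not Demogod's actual API. The structural point is that the agent picks from a closed set of navigation actions, and each one either visibly succeeds or visibly fails.

```typescript
// Closed action set: the agent can navigate, highlight, or scroll.
// Deliberately absent: any { kind: "generate" } action, so no
// open-ended thinking is offloaded to the model.
type DemoAction =
  | { kind: "navigate"; url: string }
  | { kind: "highlight"; selector: string }
  | { kind: "scrollTo"; selector: string };

interface ActionResult {
  action: DemoAction;
  succeeded: boolean; // observable: the element was found, or it wasn't
}

function runAction(action: DemoAction): ActionResult {
  switch (action.kind) {
    case "navigate":
      window.location.assign(action.url);
      return { action, succeeded: true };
    case "highlight": {
      const el = document.querySelector<HTMLElement>(action.selector);
      if (el) el.style.outline = "2px solid orange"; // visible, checkable effect
      return { action, succeeded: el !== null };
    }
    case "scrollTo": {
      const el = document.querySelector(action.selector);
      el?.scrollIntoView({ behavior: "smooth" });
      return { action, succeeded: el !== null };
    }
  }
}
```

Verification here is binary and external (did the selector resolve?), unlike the subjective quality judgments of Article #188's guardrails, and the user still does all of the evaluating and deciding.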
## The Verdict

Viktor Löfgren's essay "AI makes you boring" identifies the mechanism behind Articles #184-185's individual tradeoffs and Article #182's organizational deployment failure:

**Original ideas come from immersing in problems for long periods. AI offloads that immersion. You get shallow, surface-level ideas instead.**

The Show HN quality collapse (pre-AI: authors with deep expertise and original insights; post-AI: boring projects with nothing interesting to say) is the individual-scale version of organizational AI deployment failure (Article #182: 90% report zero impact).

**Both fail for the same reason: Offloading cognitive work eliminates the capability to develop original thinking.**

And "human in the loop" doesn't fix it. **It makes human thought converge toward AI output** (Löfgren), the same way Article #188 showed guardrails converging toward AI-output-style verification (36-53% discrepancies, hallucinated disclaimers).

**Articles #179-188** documented trust violations, capability improvements that don't fix trust, privacy/cognitive tradeoffs that don't scale, IP violations detected faster but infrastructure unchanged, and verification tools that can't verify themselves.

**Article #189** documents the cognitive mechanism: **You can't build muscle using excavators, and you can't produce original thoughts using GPUs to think.**

Individuals who accept the privacy/cognitive tradeoffs (Danny) get productivity gains at the cost of cognitive capability. Individuals who reject those tradeoffs (Breen, Löfgren) preserve original thinking by doing the work. Organizations reject both tradeoffs (Article #182) because privacy doesn't scale, cognitive capability atrophy is existential, and verification tools can't verify themselves (Article #188).

**The work you're offloading is the capability you need to preserve.**

That's not a bug. It's the fundamental tradeoff AI productivity tools require.

**And until someone builds AI tools that reduce friction without replacing cognitive development, organizations will keep doing what 6,000 CEOs reported: Deploy cautiously, measure risk, get zero productivity impact.**

**Because when productivity gains require offloading the cognitive work that makes you interesting, the rational organizational response is: Don't deploy.**

---

**About Demogod**: We build AI-powered demo agents for websites—voice-controlled guidance that assists bounded navigation tasks without offloading the cognitive work that preserves expertise development, original thinking, and product evaluation capability. Narrow domain, preserved cognitive engagement. Learn more at [demogod.me](https://demogod.me).

**Framework Updates**: This article documents the cognitive mechanism behind Articles #184-185 (individual privacy/cognitive tradeoffs) and #182 (organizational deployment failure). Offloading cognitive work to AI eliminates the deep immersion that generates original thinking. "Human in the loop" makes human thought converge toward AI output, not the reverse. Eleven-article validation complete (#179-189): The work you're offloading is the capability you need to preserve.