# "Doing the Thing Is Doing the Thing": Why Software's Biggest Productivity Crisis Maps Perfectly to Voice AI Navigation
**Posted on January 28, 2026 | HN #11 · 365 points · 122 comments**
*A viral software manifesto about action versus planning paralysis reveals the exact operating philosophy that makes Voice AI navigation radical — and why every developer who's been "busy" without shipping should pay attention.*
---
## The Post That Stopped 365 Hackers in Their Tracks
On January 25, 2026, Prakhar Gupta published a blog post on softwaredesign.ing that became one of the most upvoted pieces on Hacker News that week. The title was deceptively simple: "Doing the thing is doing the thing."
The entire post is a rhythmic litany of negations. Each paragraph strips away one more excuse, one more comfortable substitute for actual work:
> Thinking about doing the thing is not doing the thing.
> Dreaming about doing the thing is not doing the thing.
> Visualizing success from doing the thing is not doing the thing.
> Waiting to feel ready to do the thing is not doing the thing.
And it continues. Planning. Buying tools. Reorganizing your workspace. Watching tutorials. Reading threads. Announcing you'll start. Listening to podcasts. Every single activity that feels productive but isn't. Every surrogate that lets you tell yourself you're making progress when you're standing still.
Then, at the end, the pivot:
> Failing while doing the thing is doing the thing.
> Doing it badly is doing the thing.
> Doing it timidly is doing the thing.
> Doing a small part of the thing is doing the thing.
365 points. 122 comments. Developers across the world nodded in recognition. Not because the insight was novel — we've all read about action bias, about shipping fast, about MVP philosophy. But because the *format* hit differently. It weaponized simplicity. It stripped away every rationalization layer until only the bare truth remained.
And here's the thing that's been nagging at me since I read it: **this manifesto is not just a productivity hack for individual developers. It is, almost accidentally, a precise description of how Voice AI navigation must work — and does work — to succeed in the real world.**
---
## The Planning Trap in Software Development
Before we get to Voice AI, let's sit with what Prakhar's post is actually diagnosing. Because it's not a new problem. It's the oldest problem in software.
Software development has always had a peculiar relationship with action versus preparation. On one hand, we celebrate shipping fast. "Move fast and break things" was a cultural axiom for an entire generation of startups. On the other hand, we've built an entire ecosystem of preparation tools — architecture diagrams, requirements documents, sprint planning, design reviews, technical RFCs — that often consume more energy than the actual building.
The planning trap works like this: **every hour spent planning feels like productive work because it generates artifacts that look like progress.** A detailed architecture document. A well-organized Jira board. A comprehensive requirements specification. These are all real deliverables. They sit in shared drives. They get reviewed. They get commented on. They tick boxes.
But none of them are the thing.
The thing is the working software. The thing is the feature that users can touch. The thing is the code that runs in production and solves a real problem for a real person.
Prakhar's post hit 365 points because every developer has been there. You've spent three days designing the perfect database schema when you could have shipped a working prototype in six hours using a spreadsheet. You've written twelve pages of API documentation before writing a single endpoint. You've debated ORM choices in a Slack channel for a week while the feature your users actually need sits unbuilt.
**The planning trap is real. It's pervasive. And it masquerades as professionalism.**
But here's what makes this particularly interesting in 2026: the planning trap hasn't gotten smaller with AI. In some ways, it's gotten larger. Because now we have even more sophisticated tools for planning — AI-generated architecture reviews, automated code analysis, intelligent requirements synthesis — that let us feel even more productive while still not doing the thing.
---
## What "Doing the Thing" Actually Requires
Let's unpack what it means to actually "do the thing" in software, because Prakhar's post diagnoses the problem brilliantly but doesn't prescribe the solution in detail. The solution has several uncomfortable components.
### 1. Starting Before You're Ready
The most psychologically uncomfortable requirement. Your mental model of the solution is incomplete. Your understanding of the requirements is fuzzy. Your technical approach hasn't been vetted by the team. And you start anyway.
This isn't recklessness. It's recognition that **perfect understanding is asymptotic** — you can approach it but never reach it through planning alone. The only way to fill the gaps in your understanding is to encounter them. And the only way to encounter them is to start building.
### 2. Shipping Broken Things
Not broken in the sense of "this crashes on launch." Broken in the sense of "this is incomplete, this doesn't handle edge cases, this has rough edges." The first version of almost everything is ugly. It's functional but inelegant. It works but not the way you'd design it if you had infinite time.
Prakhar says it directly: **"Doing it badly is doing the thing."**
This is the hardest part for experienced developers. We've seen enough bad code to know exactly how our first attempt will fall short. We have a vivid picture of the gap between "what I'm about to ship" and "what this should eventually be." Facing that gap and shipping anyway is the core skill.
### 3. Iterating from Imperfect Starting Points
The corollary of shipping broken things is that your work is never done in one pass. Every feature goes through versions. Every system evolves. The first implementation teaches you things that make the second implementation better, which teaches you things that make the third implementation better.
This is why "doing a small part of the thing" counts as doing the thing. You don't need to solve everything at once. You need to solve something, learn from it, and solve the next thing.
### 4. Tolerating Failure as Information
When you do the thing and it fails — users don't engage, the approach doesn't scale, the architecture breaks under load — that failure is the most valuable information you'll ever get. More valuable than any planning exercise. More valuable than any requirements document.
Prakhar: **"Failing while doing the thing is doing the thing."**
Failure is not the opposite of progress. It is progress. It's the fastest possible way to learn which of your assumptions were wrong.
---
## The Uncomfortable Parallel to AI Navigation
Now here's where it gets interesting. Because every single principle above — start before ready, ship broken things, iterate from imperfect starting points, treat failure as information — is exactly what Voice AI navigation does when it guides a user through a website.
And the principles that Prakhar lists as "not doing the thing" — planning perfect routes, waiting to understand everything, explaining the thing before doing it, reorganizing your tools before acting — these are precisely the failure modes that AI navigation systems fall into when they're built wrong.
Let me be specific.
### Planning the Perfect Route Is Not Navigating
Traditional AI navigation approaches often start by building a complete mental model of a website before attempting any navigation. Parse the entire DOM. Map all pages. Understand every link relationship. Build a graph of the site structure. Only then, with complete information, attempt to guide the user.
This sounds thorough. It sounds professional. It sounds like "doing the research first."
But it's exactly what Prakhar is warning against.
**Planning the perfect system for the thing is not doing the thing.**
A website is a living system. The DOM changes between page loads. New features appear. Old ones get removed. Seasonal content shifts layouts. A/B tests alter navigation patterns. By the time your AI has built its perfect mental map, the map is already partially wrong.
Voice AI sidesteps this trap entirely. It doesn't build a complete model first. It starts navigating. It observes the current state of the page — right now, in this moment — and takes the next logical step toward the user's goal. If that step leads somewhere unexpected, it adjusts. The navigation happens in real-time, based on what's actually in front of it, not on a pre-computed blueprint.
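The difference between the two approaches can be sketched as a loop. This is a minimal illustration, not any particular product's implementation; the names (`Step`, `observe`, `choose_next_step`, `perform`) are hypothetical stand-ins for whatever page-reading and action primitives a real system uses. The point is structural: the system never plans more than one step ahead of the live page.

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str      # e.g. "click", "scroll", "type"
    target: str      # a selector or label on the current page

def navigate(goal: str, observe, choose_next_step, perform,
             max_steps: int = 20) -> bool:
    """Move toward `goal` by repeatedly reading the live page and
    taking one step — no pre-computed site map, no full route."""
    for _ in range(max_steps):
        page = observe()                      # the DOM as it is right now
        if goal in page.get("reached", []):   # goal satisfied on this page?
            return True
        step = choose_next_step(goal, page)   # simplest hypothesis for the next move
        if step is None:
            return False                      # dead end: stop cleanly, don't crash
        perform(step)                         # act; the next observe() sees the result
    return False
```

Note what's absent: there is no crawl phase and no stored graph of the site. If the page changed since the last load, the next `observe()` simply reflects that, and the next step is chosen against reality rather than against a stale map.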
### Waiting to Feel Ready Is Not Navigating
There's a related failure mode in AI navigation: the system waits until it has "high confidence" in its understanding of the page before acting. It sees a button but isn't sure what it does. It sees a menu but doesn't know which option leads where. So it pauses. Analyzes. Sometimes asks the user for clarification that the user didn't request and doesn't need.
From the user's perspective, this feels like hesitation. Like the system is uncertain. Like it's not ready.
Voice AI doesn't wait to feel ready. It reads the page, forms a hypothesis about the best path forward, and moves. If the hypothesis is wrong, the user sees what happened and can redirect. The navigation is a conversation, not a monologue. And conversations move at the speed of interaction, not the speed of analysis.
**Waiting to feel ready to do the thing is not doing the thing.**
### Explaining Before Doing Is Not Navigating
Another common pattern in AI assistants: before taking any action, they explain what they're about to do. "I'll click on the 'Settings' menu, then look for the 'Privacy' option, and from there navigate to the data management section." A full narration of the planned route before a single step is taken.
This feels helpful. It feels transparent. It gives the user a sense of what's coming.
But in practice, most users don't want a tour guide who describes every exhibit before you enter the museum. They want someone who walks beside them and points things out as they happen. The explanation should come with the action, not before it.
Voice AI navigates first, describes second. It moves the cursor, clicks the element, reveals the result — and offers context about what just happened and why, not a pre-flight briefing about what might happen.
**Explaining the thing to others is not doing the thing.**
### Failing While Navigating Is Navigating
This is the most important parallel, and the one that separates good AI navigation from bad.
Every navigation attempt will sometimes go wrong. The user asked to find something that doesn't exist in the obvious place. The page layout shifted. A modal appeared unexpectedly. The user's mental model of what they wanted doesn't match the site's structure.
A navigation system that treats failure as fatal — that stops, reports an error, asks for a restart — is a system that will never be trusted for complex tasks. Because complex tasks have failures baked in. They're not edge cases. They're the normal texture of real-world navigation.
Voice AI treats navigation failures as information. A wrong turn reveals something about the site structure. An unexpected modal reveals something about the page's state. A dead end reveals something about what the user actually wanted versus what they said they wanted. Each failure makes the next attempt better.
**Failing while doing the thing is doing the thing.**
---
## The Deeper Philosophy: Action Creates Understanding
There's a philosophical principle buried in Prakhar's post that goes beyond productivity advice. It's about the relationship between action and knowledge.
The planning mindset assumes a linear progression: **understand → plan → execute.** First you learn everything about the problem space. Then you design a solution. Then you build it. Understanding precedes action.
But Prakhar's manifesto implies the opposite: **action creates understanding.** You learn what the problem actually is by trying to solve it. You discover requirements you never thought to ask about by building something and watching how it behaves. You understand the solution space by inhabiting it, not by studying it from the outside.
This is not a new idea. It echoes decades of agile methodology, lean startup thinking, and the general philosophy of iteration. But in 2026, with AI systems becoming capable of taking autonomous actions at scale, it's more relevant than ever.
Because AI systems face the same choice that developers do: **plan thoroughly before acting, or act quickly and learn from the results.**
And the answer, as Prakhar makes viscerally clear, is the same: **doing the thing is doing the thing.**
### Why This Matters for AI Navigation Specifically
Traditional software development has the luxury of offline planning. You can sit down, think about a feature, design the database schema, write tests, review with colleagues — all before writing a single line of production code. The "doing" is separated from the "planning" by hours or days.
AI navigation has no such luxury. The user is waiting. The page is loaded. The task is clear. The system has seconds — sometimes milliseconds — to decide what to do next. There is no time for comprehensive planning. There is only time for action.
This temporal constraint forces AI navigation into the "doing the thing" paradigm whether it wants to or not. The question isn't whether to act before you have complete information. The question is how to act well with incomplete information.
And the answer Voice AI gives is: **observe what's in front of you, form the simplest hypothesis about what to do next, do it, and learn from what happens.**
This is Prakhar's manifesto, translated into navigation logic:
- "Failing while doing the thing is doing the thing" → A wrong click reveals the site structure. That's progress.
- "Doing it badly is doing the thing" → An imperfect navigation path still gets the user closer to their goal than no navigation at all.
- "Doing it timidly is doing the thing" → A cautious first step — clicking one link, observing the result — is still a step. It's still movement.
- "Doing a small part of the thing is doing the thing" → Getting the user 70% of the way to their goal is infinitely better than planning the perfect route and never starting.
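The "failure as information" principle above can also be made concrete. The sketch below is illustrative only — `reach_goal` and `try_path` are assumed names, not a real API — but it shows the shape: a wrong turn is recorded and pruned, so every failed attempt narrows the space for the next one instead of aborting the task.

```python
def reach_goal(goal: str, candidates: list[str], try_path,
               max_attempts: int = 5):
    """Try candidate paths from simplest to least simple; treat each
    failure as information that prunes the next attempt."""
    ruled_out = set()
    for _ in range(max_attempts):
        remaining = [p for p in candidates if p not in ruled_out]
        if not remaining:
            return None                    # genuinely stuck: nothing left to try
        hypothesis = remaining[0]          # simplest untried path first
        if try_path(hypothesis, goal):     # act, then observe the outcome
            return hypothesis              # doing the thing worked
        ruled_out.add(hypothesis)          # a wrong turn is progress: rule it out
    return None
```

A planner that refused to act until it was sure of the right path would learn nothing from `ruled_out`; this loop earns that knowledge by moving.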
---
## The 365-Point Signal
Why did this post get 365 points? Why did it draw 122 comments?
Because it articulated something that most developers feel but rarely say out loud: **we spend too much time preparing to do things and not enough time actually doing them.** And the guilt of that — the awareness that you're in the planning trap even as you're trapped in it — makes Prakhar's simple, rhythmic litany feel like permission. Permission to start badly. Permission to ship broken things. Permission to learn from failure instead of trying to prevent it.
365 points on Hacker News is a signal. It means something resonated at scale. It means hundreds of experienced developers recognized themselves in those paragraphs.
And the fact that it resonated so strongly in 2026 — in the age of AI coding assistants, automated testing, LLM-generated architecture reviews — is telling. We haven't solved the planning trap with better tools. If anything, we've made it worse. We've just given ourselves more sophisticated ways to feel productive without shipping.
---
## What Voice AI Teaches Us About Doing the Thing
Voice AI navigation is, in a sense, an existence proof for Prakhar's philosophy. It demonstrates that you can navigate complex, dynamic systems effectively without:
- Understanding the entire system first
- Planning a complete route in advance
- Waiting until you have high confidence in your approach
- Explaining your plan before executing it
You just... do it. You observe. You act. You observe again. You adjust. You continue.
And it works. Not because the system is perfect. Not because every navigation attempt is optimal. But because **doing the thing — even imperfectly, even with frequent course corrections — is infinitely more productive than planning the perfect thing and never starting.**
This is not just a philosophy for AI navigation. It's a philosophy for software development in 2026. It's a philosophy for building anything in a world where the ground shifts under your feet faster than any plan can account for.
---
## The Uncomfortable Questions
If "doing the thing is doing the thing," then some questions become unavoidable:
**How much of your planning is actually necessary?** Not all planning is bad. Some planning prevents catastrophic waste. But how much of your planning is just comfortable procrastination dressed up as professionalism?
**What would you ship if you had to ship today?** Not the polished version. Not the version with all the edge cases handled. The minimum thing that solves the core problem. Would it work? Would it be useful? Would it be worth iterating on?
**How long has it been since your last "doing the thing" moment?** The last time you started something without a plan, shipped something imperfect, learned something from a failure? If it's been too long, you might be stuck in the planning trap.
**Are your AI tools helping you do the thing, or helping you plan the thing?** AI can be a powerful accelerator for actual work. It can also be the most seductive planning tool ever created. Which way are you using it?
---
## The Last Line
Prakhar ends his post with a line that's both self-aware and defiant:
> "Writing a blog about doing the thing is not doing the thing."
> "I should probably get back to work."
It's a joke. It's also the most honest moment in the piece. The author knows exactly what he's doing — he's writing about action instead of taking action. And he names it. And then he stops writing and presumably goes back to work.
That self-awareness is itself instructive. The planning trap isn't something you escape permanently. It's something you recognize in yourself, name, and actively resist. Every time. Every day. Every project.
For AI navigation, the equivalent is: **every time the system pauses to analyze instead of act, that's a planning trap moment.** Every time it builds a comprehensive model before attempting a single navigation step, that's planning. Every time it explains what it's going to do before doing it, that's planning.
The system that does the thing — that navigates first, learns second, adjusts third — is the system that works. Not because it's smarter. Because it's faster. Because it treats imperfection as a feature, not a bug. Because it understands that in a dynamic world, action is the only form of knowledge that scales.
---
## Final Thought: The Permission We All Needed
Prakhar's post got 365 points because it gave developers permission to be imperfect. Permission to start before they're ready. Permission to ship things that aren't finished. Permission to learn from failure instead of trying to prevent it.
Voice AI navigation operates on that same permission. It starts before it has a complete map. It navigates before it understands everything. It fails and learns and tries again. Not because it's broken, but because that's the only way to navigate effectively in a world that changes faster than any model can capture.
The thing is not the plan. The thing is not the documentation. The thing is not the architecture diagram. The thing is not the requirements document.
The thing is the navigation that happens. The thing is the user who gets help. The thing is the page that gets navigated. The thing is the goal that gets reached.
**Doing the thing is doing the thing.**
And in 2026, with AI capable of doing things at unprecedented speed and scale, that simple insight might be the most important engineering philosophy of all.
---
*Source: softwaredesign.ing/blog/doing-the-thing-is-doing-the-thing | HN: 365 points, 122 comments*