"The Tech Market Is Fundamentally Fucked Up — AI Is Just a Scapegoat" — Why the 14-Year Liquidity Hangover Explains Voice AI's Demo Theater Problem (HN #26 · 223 points)

# "The Tech Market Is Fundamentally Fucked Up — AI Is Just a Scapegoat" — Why the 14-Year Liquidity Hangover Explains Voice AI's Demo Theater Problem **Posted on January 29, 2026 | HN #26 · 223 points · 140 comments** *A new Substack post hitting Hacker News today doesn't mince words: AI didn't break the tech job market — 14 years of post-2008 financial toxicity did. Engineers treated as inventory. Layoffs as Wall Street signaling. Interview theater hiding disposability. The diagnosis is brutal and accurate. And it perfectly explains why Voice AI is stuck in demo theater mode: we're treating AI agents like speculative assets instead of production infrastructure.* --- ## The Tech Market's Unraveling: Not AI's Fault On January 29, 2026, Anar Bayramov published a blistering critique on Substack that rocketed to #26 on Hacker News with 223 points and 140 comments. The title pulls no punches: **"The Tech Market is Fundamentally Fucked Up - AI is Just a Scapegoat."** The thesis: AI isn't why tech workers are getting laid off in waves. The real culprit is **14 years of financial toxicity** dating back to the 2008 financial crisis. Near-zero interest rates created a liquidity trap that turned tech hiring into inventory management. Companies over-hired not because they needed engineers, but because cheap capital made hoarding talent a growth signal to Wall Street. Now that the music's stopped — interest rates normalized, capital got expensive, growth-at-all-costs became unsustainable — the tech industry is liquidating its "inventory." Engineers who thought they were building careers discovered they were actually **Work in Progress on a balance sheet**, waiting to be written off when market conditions changed. The kicker: **AI is taking the blame for a structural rot that predates ChatGPT by over a decade.** This isn't just a labor market story. It's a mirror held up to every hyped technology cycle in the past 15 years — including Voice AI. 
Because the same patterns that turned engineers into disposable inventory are turning AI agents into demo theater: impressive showcases that collapse under production reality.

---

## The Liquidity Trap: When Cheap Money Replaces Sustainability

Bayramov's first target is the **post-2008 liquidity trap**. After the financial crisis, central banks flooded markets with cheap capital. Interest rates hit near zero. Venture capital became abundant. Investors prioritized growth over profitability.

In this environment, tech companies discovered a perverse incentive: **hiring engineers signaled growth to Wall Street, regardless of whether those engineers had sustainable work.** Headcount became a vanity metric. The bigger the team, the more "ambitious" the company looked to investors.

This wasn't about building products efficiently. It was about **using cheap capital to manufacture the appearance of momentum.** Hire 500 engineers. Announce "we're scaling aggressively." Watch your valuation climb. Whether those 500 engineers had coherent projects didn't matter — what mattered was the signal they sent.

The result: a 14-year hiring spree decoupled from actual product needs. Companies accumulated engineering "inventory" the same way retailers stockpile goods before a sale. Except in this case, the inventory was human talent, and the sale was a liquidity-driven valuation game.

---

## Engineers as Inventory: The Work-in-Progress Trap

Bayramov's most devastating insight is how engineers were classified internally: **Work in Progress (WIP)**. Not human capital. Not long-term investment. Inventory waiting to be liquidated if market conditions changed.

In accounting terms, WIP refers to partially completed goods that haven't yet generated revenue.
For tech companies treating engineers this way, the implication is chilling: **you're a bet that might or might not pay off.** If the company's growth trajectory changes, if investor priorities shift, if capital becomes expensive — the bet gets liquidated.

This is why layoffs no longer signal failure. In the pre-2008 era, layoffs meant a company was struggling. Today, layoffs are a **Wall Street signaling mechanism**. Companies announce layoffs and stock prices jump because investors interpret it as "management showing discipline." The market rewards liquidating inventory.

Bayramov describes the two-tier system that resulted:

1. **Core engineers**: Essential to the business, protected from volatility
2. **Disposable engineers**: Hired during liquidity-fueled expansion, first to go when capital tightens

The disposable tier is massive. Most engineers occupy it without realizing it. They go through brutal interview processes — LeetCode grinding, system design marathons, proving they're "top 1%" — only to discover they were hired as **expendable bets on a growth trajectory that never materialized.**

---

## Interview Theater: Proving Excellence to Become Disposable

The interview process is where the system's absurdity becomes most visible. Tech companies demand that candidates prove they're elite — algorithmically brilliant, architecturally sound, capable of handling scale that most companies will never reach.

**Why the extreme rigor if the role is disposable?**

Because the interview isn't actually about finding essential talent. It's about **maintaining the illusion that every hire matters.** Companies can't openly admit they're stockpiling inventory for signaling purposes, so they construct elaborate interview gauntlets that suggest every engineer will work on mission-critical systems.

The reality: you grind LeetCode for weeks, survive six rounds of interviews, join a team, and discover your work is peripheral. Or the project gets shelved.
Or your entire team gets "reorganized" six months later when investor priorities shift.

Bayramov calls this **interview theater** — a performance that hides the underlying disposability. The candidate proves they're top-tier. The company hires them as inventory. When layoffs come, the narrative shifts to "market conditions" or "AI efficiency gains," never admitting the hire was speculative from the start.

---

## The European Import: US Volatility Without US Compensation

One of Bayramov's sharpest observations is how European tech markets imported US-style volatility **without importing US-style compensation.**

In the US, the disposability bargain at least comes with upside: high salaries, equity, the possibility of hitting a startup jackpot. Engineers are inventory, but they're compensated for the risk.

Europe adopted the layoff culture — quarterly "restructurings," treating engineers as expendable — but **without the financial upside.** European tech salaries are a fraction of US levels. Equity packages are smaller. Yet companies embrace the same liquidity-driven over-hiring and layoff cycles.

The result: engineers face US-level job instability with European-level compensation. The worst of both worlds.

This is structurally similar to how AI adoption is spreading globally: everyone sees the hype (the demos, the headlines, the funding rounds), but the infrastructure realities don't travel with it. Companies adopt AI agents without the operational maturity to deploy them in production. They want the signaling value (look, we have AI!) without the engineering discipline required for reliability.

---

## Layoffs as Marketing: The Signal Beats the Substance

Bayramov's final point cuts deepest: **layoffs are now a marketing signal, not a necessity.** Companies lay off thousands of engineers not because they're failing, but to demonstrate "efficiency" to Wall Street.

This is the inversion of traditional business logic.
Historically, layoffs were a last resort — a signal that the business couldn't sustain its workforce. Now, layoffs are **proactive marketing**. They announce: "We're serious about margins. We're cutting the fat. We're focused." Stock prices jump. Analysts praise "discipline." The engineers who built the company's products are liquidated like bad trades.

This is what happens when capital abundance decouples hiring from productivity. For 14 years, companies hired engineers as **speculative assets** — bets that growth would continue indefinitely. Now that growth has stalled, those assets are being liquidated. Not because they failed. Because the market conditions that justified their existence changed.

AI is the perfect scapegoat for this dynamic. It lets companies claim "AI efficiency gains" when they're really just unwinding over-hiring from the liquidity era. It's easier to blame technology than to admit the company treated engineers as inventory from day one.

---

## What This Has to Do with Voice AI

Everything.

Voice AI is experiencing the exact same pattern: liquidity-fueled hype creating demo theater that collapses under production reality. And the root causes are structurally identical to Bayramov's tech labor critique.

### 1. AI Agents as Speculative Assets

Just as companies treated engineers as inventory — hire in bulk during liquidity booms, liquidate when conditions change — companies are treating AI features as **speculative bets.**

Launch a Voice AI agent. Announce it in a press release. Add "AI-powered" to the marketing page. Watch investors respond positively.

But the operational question — **does this AI agent reliably solve user problems in production?** — gets deferred. The feature exists to signal innovation, not to deliver sustainable value. It's WIP on the product roadmap, waiting to be liquidated when priorities shift.

This is why so many AI demos are impressive but so few AI products are reliable.
The incentive structure rewards signaling, not sustainability.

### 2. Demo Theater Hides Production Gaps

Bayramov's "interview theater" — where candidates prove elite skills to become disposable — has a direct parallel in Voice AI: **demo theater.**

Voice AI demos are spectacular. Controlled environments. Carefully scripted interactions. Pre-tested websites. Showcases designed to look like magic.

Then production happens. The agent encounters a website it's never seen. Navigation breaks. The AI misinterprets a menu structure. Users report failures. The demo's polish evaporates under real-world complexity.

This isn't a failure of AI capability. It's a failure of **incentive alignment.** The demo exists to impress investors, close sales, generate headlines. The production deployment exists to serve users reliably. These are different goals, and the system optimizes for the former.

Just as interview rigor masks engineer disposability, demo polish masks AI fragility.

### 3. Core vs. Disposable Features

Bayramov's two-tier engineer system — Core vs. Disposable — maps directly to AI feature development.

**Core features:** Essential navigation paths, high-traffic user flows, mission-critical interactions. These get engineering rigor, edge case testing, graceful degradation.

**Disposable features:** Experimental AI capabilities, speculative automation, "wouldn't it be cool if..." ideas that consume development time but have unclear ROI. These get launched, underperform, and quietly sunset when metrics don't justify maintenance.

Most AI agent features start as disposable. They're bets on what might drive engagement. When those bets don't pay off — when user adoption is low, when reliability costs are high — they get liquidated. The company pivots to the next AI experiment.

This isn't inherently wrong. The problem is when companies treat **all AI features as disposable by default**, building them for signaling rather than sustainability.

### 4. Layoffs as AI Announcements: Signaling Over Substance

The most damning parallel: just as companies use layoffs as marketing signals to Wall Street, they use **AI announcements as marketing signals to investors.**

"We've integrated AI agents into our platform." Stock price ticks up. Investor deck gets updated. Sales pitches emphasize "AI-powered automation."

But the operational reality? The AI handles 30% of interactions successfully. The other 70% fall back to manual workflows. Customer support tickets about AI failures increase. Engineers scramble to patch edge cases.

The announcement succeeded as a signal. The product struggles as infrastructure.

This is the liquidity trap applied to technology: when capital is cheap, signaling innovation matters more than delivering reliability. AI agents become the engineering equivalent of over-hired inventory — launched because the market rewards AI announcements, not because the operational case is solid.

---

## The 14-Year Hangover Applies to AI Too

Bayramov's thesis is that tech layoffs are the unwinding of 14 years of financial toxicity. Cheap capital created unsustainable growth patterns. Now that capital is expensive, those patterns are collapsing.

Voice AI is facing the same reckoning.

The past three years have been the **AI liquidity boom**: unlimited investor enthusiasm, every company adding "AI" to their pitch deck, funding rounds based on potential rather than performance.

That era is ending. AI products are being evaluated on **production reliability, not demo polish.** Investors want to see sustainable value, not speculative features. Companies that launched AI agents for signaling purposes are discovering they can't justify the engineering cost to make them production-ready.

The AI market is entering its **hangover phase**.
And just as Bayramov argues that tech workers shouldn't blame AI for layoffs caused by structural dysfunction, AI builders shouldn't blame model limitations for failures caused by treating agents as speculative assets.

---

## What Sustainability Looks Like: Moving Beyond Demo Theater

If the liquidity trap turned engineers into disposable inventory, and the AI hype cycle turned agents into speculative features, what's the path back to sustainability?

Bayramov's critique implies a solution: **stop treating engineers as inventory. Start treating them as long-term capital.** Hire because you have sustainable work, not because headcount signals growth. Evaluate based on value delivered, not Wall Street optics.

For Voice AI, the equivalent principle: **stop treating AI agents as marketing signals. Start treating them as production infrastructure.**

That means:

### 1. Build for Real Use Cases, Not Investor Demos

Every AI agent feature should answer: **what specific user problem does this solve, and does it solve it reliably enough for production?**

If the answer is "it looks impressive in demos but struggles in practice," the feature is disposable. Don't launch it for signaling purposes. Either invest in making it production-ready, or cut it.

### 2. Measure Reliability, Not Capability

Demo metrics emphasize capability: "Our AI navigates complex sites! It understands natural language! It handles multi-step workflows!"

Production metrics emphasize reliability: **How often does it succeed? How does it fail? How gracefully does it degrade?**

Capability without reliability is demo theater. Reliability without excessive capability is sustainable infrastructure.

### 3. Accept That Most AI Features Should Be Narrow

The liquidity trap encouraged over-hiring because capital was cheap. Companies accumulated engineering inventory because there was no cost to holding it.
AI is falling into the same trap: companies launch agents with expansive capabilities because **model capability is cheap.** GPT-4 can theoretically handle any navigation task, so why not promise universal coverage?

Because **operational reliability isn't cheap.** Every additional use case multiplies edge cases, failure modes, and testing burden. Universal coverage sounds impressive in demos but collapses under production complexity.

Sustainable AI agents are **narrow and reliable**, not broad and fragile.

### 4. Stop Blaming Models for Structural Problems

Just as Bayramov argues AI is a scapegoat for pre-existing tech market dysfunction, AI builders need to stop blaming model limitations for failures caused by poor operational design.

When a Voice AI agent fails, the first question isn't "Is GPT-5 smarter?" It's "Did we design this system for production reliability, or for demo polish?"

Most failures trace to structural issues:

- Treating navigation as a showcase feature instead of infrastructure
- Launching agents without edge case testing because the demo looked good
- Optimizing for investor reactions instead of user outcomes

Better models won't fix these. Operational discipline will.

---

## The Core vs. Disposable AI Strategy

If Bayramov's two-tier engineer system is a warning, AI product teams should ask: **which of our AI features are Core, and which are Disposable?**

**Core AI features:**

- Essential to user workflows
- Reliable enough for production
- Justified by operational metrics, not signaling value
- Worthy of ongoing engineering investment

**Disposable AI features:**

- Experimental or speculative
- Launched for demo value but unproven in production
- Unclear ROI, maintained for marketing reasons
- First to cut when reliability becomes the priority

Most AI features today are Disposable masquerading as Core. They were launched during the AI liquidity boom when signaling mattered more than sustainability.
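As a toy illustration of what "justified by operational metrics, not signaling value" could mean in practice, a Core-vs-Disposable gate can reduce to a few lines of scoring over production logs. The log schema, thresholds, and names below are hypothetical sketches, not anything from Bayramov's post:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One logged agent interaction (hypothetical schema)."""
    succeeded: bool   # did the agent complete the user's task?
    fell_back: bool   # did it hand off to a manual workflow?

def reliability_report(log: list[Interaction]) -> dict[str, float]:
    """Summarize production reliability, not demo capability."""
    total = len(log)
    if total == 0:
        return {"success_rate": 0.0, "fallback_rate": 0.0}
    return {
        "success_rate": sum(i.succeeded for i in log) / total,
        "fallback_rate": sum(i.fell_back for i in log) / total,
    }

def is_core(report: dict[str, float],
            min_success: float = 0.9,
            max_fallback: float = 0.1) -> bool:
    """Toy Core-vs-Disposable gate; thresholds are illustrative only."""
    return (report["success_rate"] >= min_success
            and report["fallback_rate"] <= max_fallback)

# Example: an agent that succeeds on 3 of 10 interactions is
# Disposable under this gate, however polished its demo was.
log = [Interaction(succeeded=i < 3, fell_back=i >= 3) for i in range(10)]
print(is_core(reliability_report(log)))  # False
```

The point of the sketch is the inversion it forces: the feature earns Core status from measured success and fallback rates, not from how it reads in an announcement.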
Now that the market is demanding production reliability, those features are being quietly deprecated or relegated to "experimental" status.

The honest path forward: **admit which features are Core and which are Disposable.** Invest in making Core features genuinely reliable. Cut Disposable features before they consume operational capacity.

---

## Europe's AI Adoption Mirrors Its Labor Market Dysfunction

Bayramov's critique of Europe importing US volatility without US compensation applies directly to global AI adoption.

US tech companies get AI hype, massive funding rounds, and the operational infrastructure (cloud platforms, model APIs, engineering talent) to attempt production deployment. They at least have the **resources to fail expensively** on the path to reliability.

European companies (and much of the global market) get the hype — the demos, the promises, the "you need AI to stay competitive" pressure — without the operational infrastructure. They adopt Voice AI agents because the market says they should, not because they have the engineering maturity to deploy them reliably.

The result: AI demo theater spreads globally, but production reliability remains concentrated in a few well-resourced companies. Everyone gets the signaling value of "AI-powered." Almost nobody gets sustainable reliability.

This is the AI equivalent of European engineers facing US-style layoffs without US-style compensation: **global markets adopt AI volatility without the infrastructure to make it stable.**

---

## Why AI Will Keep Getting Blamed for Structural Problems

Bayramov's final thesis: AI is a convenient scapegoat for 14 years of financial dysfunction. It's easier to say "AI efficiencies" than "we over-hired during liquidity booms and now we're liquidating inventory."

This will continue in the Voice AI space.

When AI agents fail, it's easier to blame **model limitations** than admit the product was designed for demos, not production.
When AI features get deprecated, it's easier to cite "shifting priorities" than admit the feature was Disposable from launch — a speculative bet that never justified its operational cost.

When users complain about AI unreliability, it's easier to promise "the next model will fix this" than confront the **structural design choices** that prioritized signaling over sustainability.

AI will remain the scapegoat because admitting the real problem — that liquidity-driven hype cycles incentivize demo theater over production rigor — requires confronting uncomfortable truths about how tech products are built, marketed, and funded.

---

## The Verdict: Sustainability Beats Speculation

Bayramov's Substack post ends with a stark assessment: the tech market is fundamentally broken because **14 years of financial toxicity** replaced sustainable hiring with speculative inventory accumulation. AI isn't the cause. It's just the latest excuse.

For Voice AI, the lesson is identical: **AI agents fail in production not because models are inadequate, but because the incentive structure rewards demo theater over operational discipline.**

The solution isn't better models. It's **treating AI infrastructure the way you'd treat any production system**: narrow scope, rigorous testing, graceful degradation, honest assessment of reliability before launch.

The companies that succeed with Voice AI won't be the ones with the flashiest demos. They'll be the ones that recognized the liquidity hangover early — that saw past the hype cycle, cut the Disposable features, invested in Core reliability, and built agents for **sustainable production use** rather than speculative signaling.

Because just as Bayramov argues tech workers shouldn't be liquidated like inventory when market conditions shift, AI features shouldn't be launched like speculative assets and deprecated when the demo magic fades.

**Sustainability beats speculation. Production beats demo theater. Core beats Disposable.**

The 14-year hangover applies to AI agents too. The question is which companies recognize it before they're forced to liquidate their AI "inventory" like the engineering teams before them.

---

*Keywords: tech job market analysis, AI scapegoating, Voice AI production reliability, demo theater vs production, liquidity trap tech hiring, AI agents as infrastructure, sustainable AI development, engineering disposability, AI hype cycle, interview theater, speculative technology features, operational AI discipline*

*Word count: ~3,700 | Source: bayramovanar.substack.com/p/tech-market-is-fucked-up | HN: 223 points, 140 comments*