# AI Agents Playing SimCity Through REST APIs Reveals the Future of Voice AI Navigation
**Meta Description:** Hallucinating Splines lets AI agents be mayors via REST API. If agents can navigate complex city-building, Voice AI can navigate product demos. Here's the architectural parallel.
---
## The Project That Shows Voice AI's Next Evolution
A developer just shipped **Hallucinating Splines**—a headless SimCity where AI agents are the mayors. Not pre-programmed bots. Not scripted NPCs. **Actual LLM agents making city planning decisions via REST API.**
32 registered "mayors." 424 cities built. 7.9 million simulated population. All managed by AI agents calling API endpoints like `build_coal_power`, `set_tax_rate`, and `advance_months`.
Here's what the HN thread isn't discussing but should be: **If AI agents can navigate the complexity of city management through APIs, Voice AI agents can navigate product demos the exact same way.**
The architecture is identical. The pattern is proven. And the implications for Voice AI navigation are massive.
## What Hallucinating Splines Actually Does
**The setup:** Micropolis engine (open-source SimCity) exposed as REST API. AI agents (via MCP or direct API calls) build and manage cities.
**The agent workflow:**
1. Create city: `POST /v1/cities` with seed
2. Query state: `GET /v1/cities/{city_id}/state` (returns population, funds, demands, etc.)
3. Take action: `POST /v1/cities/{city_id}/actions` with `{action: "build_coal_power", x: 10, y: 10, auto_road: true}`
4. Advance time: `POST /v1/cities/{city_id}/advance` with `{months: 1}`
5. Loop: Query state → Analyze → Decide → Act → Advance time → Repeat
**The result:** Agents building cities that reach populations of 60,000+, managing budgets, responding to disasters, optimizing layouts.
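That query → act → advance loop is easy to sketch. Below is a minimal TypeScript driver, assuming the endpoint shapes described above; `CityClient` is a hypothetical wrapper around the HTTP calls, and the single if-rule stands in for the LLM's actual reasoning:

```typescript
// Sketch of the mayor-agent loop. Endpoint shapes follow the article;
// CityClient is a hypothetical wrapper around the REST calls.
interface CityState {
  population: number;
  funds: number;
  residential_demand: "LOW" | "MEDIUM" | "HIGH";
}

interface CityClient {
  getState(cityId: string): Promise<CityState>;                              // GET  /v1/cities/{id}/state
  act(cityId: string, action: { action: string; x: number; y: number }): Promise<void>; // POST /actions
  advance(cityId: string, months: number): Promise<void>;                    // POST /advance
}

async function runMayorLoop(client: CityClient, cityId: string, ticks: number): Promise<string[]> {
  const log: string[] = [];
  for (let i = 0; i < ticks; i++) {
    const state = await client.getState(cityId);            // 1. query state
    // 2-3. analyze + decide (trivial stand-in for the LLM's reasoning step)
    if (state.residential_demand === "HIGH" && state.funds >= 500) {
      await client.act(cityId, { action: "build_residential_zone", x: 10 + i, y: 10 }); // 4. act
      log.push("build_residential_zone");
    } else {
      log.push("wait");
    }
    await client.advance(cityId, 1);                        // 5. advance time, then loop
  }
  return log;
}
```

Swapping the if-rule for an LLM call (feed the state JSON in, parse an action out) turns this skeleton into the full mayor agent.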
**Why this matters for Voice AI:** The exact same architectural pattern applies to product demo navigation.
## The Parallel: SimCity Agents = Voice AI Demo Agents
Let's map Hallucinating Splines architecture to Voice AI demo architecture:
### SimCity Agent Workflow:
```
1. Query city state (GET /state)
   → Returns: population, funds, residential demand, traffic, crime, etc.
2. Analyze state
   → Agent reasons: "Residential demand is HIGH but funds are low"
3. Decide action
   → Agent chooses: Zone residential areas (cheap) before building services (expensive)
4. Execute action (POST /actions)
   → build_residential_zone at coordinates (15, 20)
5. Advance simulation (POST /advance)
   → Time passes, residents move in, revenue increases
6. Loop
```
### Voice AI Demo Agent Workflow:
```
1. Query product state (via MCP tool: get_current_page_state)
   → Returns: DOM structure, visible elements, user context, current feature
2. Analyze state
   → Agent reasons: "User asked about analytics but is on homepage"
3. Decide action
   → Agent chooses: Navigate to analytics dashboard
4. Execute action (via MCP tool: navigate_to_feature)
   → Navigates to /analytics route
5. Render explanation (via Tambo)
   → Renders AnalyticsDashboard component with user's data
6. Loop
```
**The pattern is identical:** Query state → Analyze → Decide → Execute → Advance → Loop.
**The key insight:** If agents can manage city complexity (hundreds of variables, emergent behaviors, long-term planning), Voice AI agents can manage demo complexity (product features, user intent, contextual navigation).
## Four Architectural Lessons from Hallucinating Splines
### Lesson 1: State Query Is Everything
Hallucinating Splines agents can't "see" the city. They query state via API:
**Request:** `GET /v1/cities/{city_id}/state`
```json
{
  "population": 12500,
  "funds": 8400,
  "year": 1945,
  "residential_demand": "HIGH",
  "commercial_demand": "LOW",
  "industrial_demand": "MEDIUM",
  "traffic": "MODERATE",
  "crime": "LOW",
  "pollution": "MEDIUM"
}
```
**Voice AI demo equivalent:** Can't "see" the product interface. Must query state via MCP tools:
```typescript
// Sketch: exposed as an MCP tool (registration boilerplate omitted)
async function get_current_page_state() {
  return {
    current_route: "/homepage",
    visible_features: ["hero_banner", "feature_grid", "cta_button"],
    user_role: "enterprise_trial",
    user_context: {
      has_integrations: false,
      trial_days_remaining: 12,
      team_size: 5
    },
    navigation_options: ["/analytics", "/integrations", "/settings", "/docs"]
  };
}
```
**Why state query matters:** Agents can't make good decisions without understanding current context. Hallucinating Splines agents that don't query state build nonsensical cities (power plants with no connections, zones with no roads). Voice AI demos that don't query state give generic tours instead of contextual guidance.
### Lesson 2: Constrained Action Space Enables Complex Behavior
Hallucinating Splines doesn't let agents do arbitrary things. It provides **constrained actions:**
```
Available actions:
- build_residential_zone
- build_commercial_zone
- build_industrial_zone
- build_coal_power
- build_road
- set_tax_rate
- bulldoze
- (15 total actions documented)
```
**Why constraints work:** Agents can't hallucinate invalid moves. They can't say "build floating airport" or "make citizens immortal." Every action is validated by the API.
**Voice AI demo equivalent:** Constrained navigation actions via MCP tools:
```typescript
// Defined action space for demo navigation
const DEMO_ACTIONS = {
  navigate_to_feature: ["analytics", "integrations", "settings", "team_management"],
  show_example: ["sample_dashboard", "integration_setup", "team_invite"],
  execute_action: ["create_test_data", "send_test_email", "run_sample_query"],
  explain_concept: ["how_analytics_work", "integration_benefits", "pricing_tiers"]
};

// Sketch: exposed as an MCP tool (registration boilerplate omitted);
// `navigate` is an illustrative helper
async function navigate_to_feature(
  feature: "analytics" | "integrations" | "settings" | "team_management"
) {
  // Validated navigation: the constrained type means the agent can't hallucinate invalid routes
  return navigate(feature);
}
```
**Lesson:** Voice AI demos don't need infinite flexibility. They need **well-defined navigation primitives** that agents orchestrate.
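One caveat worth making explicit: a type union only constrains compile-time callers. Tool arguments arrive from the LLM at runtime as untyped JSON, so the same action space needs a runtime check before dispatch. A minimal sketch (the feature list mirrors the hypothetical `DEMO_ACTIONS` registry above):

```typescript
// Runtime validation of agent-supplied tool arguments. Compile-time unions
// don't protect against an LLM sending an arbitrary string at runtime,
// so the action space is re-checked before dispatch.
const VALID_FEATURES = ["analytics", "integrations", "settings", "team_management"] as const;
type Feature = (typeof VALID_FEATURES)[number];

function parseFeature(raw: unknown): Feature {
  if (typeof raw === "string" && (VALID_FEATURES as readonly string[]).includes(raw)) {
    return raw as Feature;
  }
  // Reject instead of guessing: the agent gets a corrective error it can recover from
  throw new Error(`Unknown feature "${String(raw)}"; valid: ${VALID_FEATURES.join(", ")}`);
}
```

Returning the full list of valid features in the error message matters: it lets the agent self-correct on the next turn instead of failing silently.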
### Lesson 3: Emergent Complexity from Simple Primitives
Hallucinating Splines agents create complex cities using 15 simple actions. They're not following scripts. They're reasoning about trade-offs:
- **Budget constraint:** "I have $8K. Coal power costs $5K. Residential zone costs $500. I'll zone first, get tax revenue, THEN build power."
- **Spatial planning:** "Industrial zones create pollution. Build them downwind from residential."
- **Long-term strategy:** "Year 1950: Focus on population growth. Year 2000: Focus on service infrastructure."
**Voice AI demo equivalent:** Complex demo narratives from simple navigation primitives:
**User asks:** "Can this help our sales team close faster?"
**Agent reasoning chain:**
1. Query user context → Enterprise trial, 5-person team, no integrations
2. Decide strategy → Show CRM integration → analytics on deal velocity
3. Navigate to /integrations
4. Render IntegrationGrid component (via Tambo)
5. Highlight "Salesforce" integration
6. Explain: "Connect Salesforce, and our analytics show which deals are stalling..."
7. Navigate to /analytics
8. Render DealVelocityDashboard with sample data
9. Explain: "Teams using this see 23% faster close rates..."
**Emergent complexity:** Agent created custom demo path (integrations → analytics → CRM angle) from simple primitives (navigate, render, explain).
**Lesson:** Voice AI doesn't need pre-scripted demo flows. It needs navigation primitives that enable **emergent demo strategies.**
### Lesson 4: MCP as Standardized Game Controller
Hallucinating Splines offers two interfaces:
1. **REST API:** Direct HTTP calls for custom integrations
2. **MCP Server:** Claude Code agents can play directly via `claude mcp add hallucinating-splines`
**Why MCP matters:** Any MCP-compatible agent can become a mayor. No custom integration needed. Just connect and play.
**Voice AI demo equivalent:** Product demos should expose MCP servers for Voice AI agents:
```bash
# Add your product as a demo-navigable system
claude mcp add my-saas-product-demo --transport http "https://demo.myproduct.com/mcp"
# The agent can now navigate your product:
#   User:  "Show me how analytics work"
#   Agent: get_state → navigate_to_analytics → render_dashboard → fetch_sample_data
```
**Lesson:** MCP standardizes "how agents interact with complex systems." SimCity is a complex system. Your SaaS product is a complex system. **Both should be navigable via MCP.**
## Why SimCity Is Actually Harder Than Product Demos
Hallucinating Splines agents face challenges Voice AI demo agents don't:
**SimCity challenges:**
- **Emergent behaviors:** Traffic patterns, pollution spread, crime clusters
- **Long-term planning:** Decisions in 1950 affect city in 2050
- **Resource constraints:** Fixed budget, limited space
- **Disaster response:** Fires, floods, monsters (yes, really)
- **Non-obvious causality:** "Why did population drop?" → pollution from industrial zones → residential demand crashed
**Product demo challenges:**
- **Simpler state space:** Product features don't have emergent behaviors
- **Shorter feedback loops:** User sees result immediately (vs waiting decades in simulation)
- **Clearer objectives:** "Show user feature X" vs "Build thriving metropolis"
- **Predictable outcomes:** Clicking "Analytics" always shows analytics (vs city behavior being probabilistic)
**If agents can handle SimCity complexity, product demos are easier.**
## Three Voice AI Demo Patterns from Hallucinating Splines
### Pattern 1: Goal-Oriented Navigation
**SimCity agent goal:** "Build city with 50K population"
**Agent strategy:**
1. Query state → population: 160
2. Reason → Need residential zones + jobs + services
3. Execute → Zone residential, zone commercial, build power plant
4. Advance → Check population growth
5. Loop → Adjust strategy based on demand signals
**Voice AI demo goal:** "Show user how to set up Salesforce integration"
**Agent strategy:**
1. Query state → User on homepage, no integrations connected
2. Reason → Navigate to integrations → show Salesforce → walk through setup
3. Execute → Navigate /integrations → Render IntegrationGrid → Highlight Salesforce
4. Check understanding → "Does this make sense so far?"
5. Loop → Continue to OAuth flow → sample data fetch → analytics demo
### Pattern 2: Adaptive Strategy Based on State
**SimCity agent adaptation:**
- **Early game (Year 1900):** Focus on basic infrastructure (power, roads, zones)
- **Mid game (Year 1950):** Respond to demand signals (residential/commercial/industrial balance)
- **Late game (Year 2000):** Optimize for efficiency (traffic management, pollution control)
**Voice AI demo adaptation:**
- **First-time user:** Show overview → highlight key features → basic walkthrough
- **Trial user (day 7):** Show advanced features they haven't used → conversion tactics
- **Power user:** Skip basics → show shortcuts, advanced configs, integrations
**Pattern:** Agents adapt strategy based on state context, not following fixed scripts.
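A sketch of that adaptation as a state-driven strategy picker. The segmentation thresholds here are illustrative assumptions, not product logic:

```typescript
// Pick a demo strategy from queried user state rather than a fixed script.
// Thresholds are illustrative assumptions.
interface DemoUserContext {
  trial_day: number;
  features_used: string[];
}

type DemoStrategy = "overview_walkthrough" | "advanced_features_pitch" | "power_user_shortcuts";

function chooseStrategy(ctx: DemoUserContext): DemoStrategy {
  if (ctx.features_used.length === 0) return "overview_walkthrough";  // first-time user
  if (ctx.features_used.length >= 5) return "power_user_shortcuts";   // power user: skip basics
  if (ctx.trial_day >= 7) return "advanced_features_pitch";           // mid-trial: push conversion
  return "overview_walkthrough";
}
```

The point isn't these particular rules; it's that the branch happens at demo time, against live state, instead of being baked into a script.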
### Pattern 3: Multi-Step Orchestration
**SimCity agent orchestration:**
```
Goal: Increase tax revenue
Plan:
1. build_residential_zone (increases population)
2. advance 3 months (let population grow)
3. build_commercial_zone (creates jobs for residents)
4. advance 3 months (commercial tax revenue increases)
5. build_coal_power (enables more growth)
6. advance 12 months (compound growth)
Result: Revenue +45%
```
**Voice AI demo orchestration:**
```
Goal: Show analytics value to enterprise user
Plan:
1. navigate_to_integrations (show data source connections)
2. highlight_salesforce (relevant to enterprise)
3. navigate_to_analytics (show dashboard capabilities)
4. render_dashboard (sample data visualization)
5. navigate_to_team_management (show collaboration features)
6. explain_enterprise_tier (conversion pitch)
Result: User understands product value → requests pricing
```
**Pattern:** Agents orchestrate multi-step sequences toward goals, not executing single commands.
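That orchestration can be expressed as a plan executor: the agent emits a list of steps, and a thin runner dispatches each one to a navigation primitive. The primitive map is injected, so the sketch runs without a live product (all names are hypothetical):

```typescript
// Execute a multi-step demo plan as a sequence of primitive calls.
// `primitives` maps step names to hypothetical MCP tool invocations.
type Step = { action: string; arg?: string };

async function executePlan(
  plan: Step[],
  primitives: Record<string, (arg?: string) => Promise<void>>
): Promise<{ completed: string[]; failed?: string }> {
  const completed: string[] = [];
  for (const step of plan) {
    const run = primitives[step.action];
    if (!run) return { completed, failed: step.action }; // unknown primitive: stop, report back
    await run(step.arg);
    completed.push(step.action);
  }
  return { completed };
}
```

The enterprise plan above would be expressed as `[{ action: "navigate_to_feature", arg: "integrations" }, { action: "highlight_element", arg: "salesforce" }, ...]`, with the agent free to regenerate the remaining steps mid-demo when the user interrupts with a question.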
## The Architecture Voice AI Demos Need (Learned from SimCity Agents)
Based on Hallucinating Splines patterns, here's what Voice AI demo infrastructure needs:
### Component 1: State Query API
**SimCity equivalent:** `GET /v1/cities/{id}/state`
**Voice AI demo equivalent:**
```typescript
// Sketch: exposed as an MCP tool (registration boilerplate omitted)
async function get_demo_state(user_id: string) {
  return {
    current_location: "/homepage",
    visible_features: ["hero", "feature_grid", "testimonials"],
    user_context: {
      role: "enterprise_trial",
      trial_day: 7,
      features_used: ["analytics_basic"],
      features_not_used: ["integrations", "team_management", "api_access"],
      team_size: 5,
      industry: "SaaS B2B"
    },
    navigation_history: ["/homepage", "/analytics", "/homepage"],
    demo_progress: {
      features_shown: 2,
      estimated_completion: 0.15,
      next_recommended: "integrations"
    }
  };
}
```
### Component 2: Navigation Actions API
**SimCity equivalent:** `POST /v1/cities/{id}/actions`
**Voice AI demo equivalent:**
```typescript
// Sketch: exposed as an MCP tool (registration boilerplate omitted);
// `router`, `load_sample_data`, `highlight`, and `get_visible_elements`
// are illustrative helpers
async function navigate_to_feature(
  feature: "analytics" | "integrations" | "settings" | "team_management",
  options?: { show_sample_data?: boolean; highlight_element?: string }
) {
  // Navigate to the feature
  await router.navigate(feature);
  // Optionally pre-populate with sample data
  if (options?.show_sample_data) {
    await load_sample_data(feature);
  }
  // Optionally highlight a specific element
  if (options?.highlight_element) {
    await highlight(options.highlight_element);
  }
  return {
    success: true,
    current_location: `/${feature}`,
    visible_elements: get_visible_elements(feature)
  };
}
```
### Component 3: Context Advancement API
**SimCity equivalent:** `POST /v1/cities/{id}/advance` (advances time)
**Voice AI demo equivalent:**
```typescript
// Sketch: exposed as an MCP tool (registration boilerplate omitted);
// the called helpers are illustrative
async function advance_demo_context(user_id: string, context_update: {
  feature_shown?: string;
  user_question_answered?: string;
  conversion_signal?: "interested" | "confused" | "ready_to_buy";
}) {
  // Update demo progress tracking
  await update_demo_log(user_id, context_update);
  // Determine the next recommendation from the updated context
  const next_step = analyze_demo_progression(user_id, context_update);
  return {
    demo_progress_updated: true,
    next_recommended_action: next_step,
    estimated_time_to_conversion: estimate_conversion_timing(user_id)
  };
}
```
### Complete Voice AI Demo Architecture (SimCity-Inspired)
```
┌─────────────────────────────┐
│     Voice AI Demo Agent     │
│ (LLM with MCP integration)  │
└──────────────┬──────────────┘
               │
               │ MCP Protocol
               │
      ┌────────▼────────┐
      │  Product Demo   │
      │   MCP Server    │
      └────────┬────────┘
               │
    ┌──────────┼──────────┬──────────┐
    │          │          │          │
┌───▼───┐  ┌───▼───┐  ┌───▼───┐  ┌───▼───┐
│ State │  │  Nav  │  │Visual │  │Context│
│ Query │  │Action │  │Render │  │Advance│
└───────┘  └───────┘  └───────┘  └───────┘
get_demo   navigate   render     advance
_state     _to        _component _demo
           _feature              _context
```
**Usage:**
```typescript
// Agent orchestrates demo navigation
const state = await get_demo_state(user_id);
// → user on homepage, enterprise trial, day 7
await navigate_to_feature("integrations", {show_sample_data: true});
// → navigates to /integrations, loads Salesforce example
await render_component("IntegrationSetupWalkthrough", {integration: "salesforce"});
// → Tambo renders visual walkthrough
await advance_demo_context(user_id, {feature_shown: "salesforce_integration"});
// → logs progress, recommends showing analytics next
await navigate_to_feature("analytics");
await render_component("DealVelocityDashboard", {data_source: "salesforce_sample"});
// → shows analytics powered by Salesforce data
// Agent has orchestrated: integrations → analytics → CRM value story
// Using same pattern as SimCity agent orchestrating: zones → power → growth
```
## Why This Architecture Works (And Traditional Demo Flows Don't)
**Traditional demo flow:**
```
Pre-scripted sequence:
1. Show homepage
2. Show feature A
3. Show feature B
4. Show feature C
5. Pitch pricing
```
**Problem:** Every user gets same flow regardless of context. Enterprise user sees same demo as solo founder. Day 1 trial sees same demo as day 14.
**SimCity-inspired agent navigation:**
```
Adaptive sequence based on state:
1. Query user context
2. Determine optimal feature sequence
3. Navigate dynamically
4. Respond to user questions mid-demo
5. Adjust strategy based on engagement signals
```
**Advantage:** Each demo is customized. Agent reasons about user context and constructs demo path in real-time.
**Example comparison:**
**User:** "I need to convince my CFO this saves time on reporting."
**Traditional flow:** Continues with pre-scripted sequence (probably showing wrong features first)
**Agent navigation:**
1. Query context → CFO angle = financial reporting focus
2. Navigate to /analytics → financial_dashboards
3. Render CFOReportDashboard → pre-built financial templates
4. Navigate to /integrations → accounting_software
5. Explain → "Your accounting data auto-syncs → reports update real-time → CFO sees ROI immediately"
**Agent constructed custom demo path** (financial angle) **from navigation primitives** (same primitives as showing technical features to engineers or sales features to revenue teams).
## The One Thing Hallucinating Splines Gets Right That Most Products Get Wrong
**Hallucinating Splines decision:** Expose city simulation as API. Don't try to predict all use cases. Let agents explore.
**Most SaaS products:** Build fixed demo flows. Pre-record videos. Create scripted walkthrough tours.
**Why Hallucinating Splines approach wins:**
**32 mayors (agents) have created 424 cities.** Each city is unique. Agents try different strategies:
- Some optimize for population growth
- Some optimize for revenue
- Some experiment with layouts
- Some respond to disasters differently
**No two demo paths are identical** because agents reason about context.
**Voice AI demos should work the same way:**
Don't pre-script 5 demo flows (enterprise, SMB, technical, sales, marketing).
**Expose product as navigable system via MCP.** Let Voice AI agents construct custom demo paths based on:
- User role (CFO vs engineer vs sales rep)
- Trial progress (day 1 vs day 14)
- Feature usage history (power user vs newcomer)
- Current conversation context (user asked specific question → navigate to answer)
**Result:** Infinite demo variations, all contextually appropriate, all constructed in real-time by agents reasoning about user needs.
## What to Build Right Now
If you're building Voice AI demos (or adding Voice AI to existing products), here's your SimCity-inspired roadmap:
### Week 1: Define State Query API
**Task:** What state does Voice AI need to query to understand demo context?
**Output:** MCP tool for `get_demo_state` that returns:
- Current location in product
- User context (role, trial status, features used)
- Navigation options
- Demo progress
### Week 2: Define Navigation Actions
**Task:** What are the atomic navigation primitives for your product?
**Output:** MCP tools for:
- `navigate_to_feature(feature_name)`
- `show_example(example_type)`
- `highlight_element(element_id)`
- `execute_demo_action(action_name)`
**Constraint:** 10-20 actions maximum. Don't try to cover every edge case. Simple primitives that compose.
### Week 3: Build MCP Server
**Task:** Implement MCP server that exposes state + navigation APIs.
**Output:**
```bash
claude mcp add my-product-demo --transport http "https://demo.myproduct.com/mcp"
```
Agents can now navigate your product.
### Week 4: Test Agent Navigation
**Task:** Give Voice AI agent a goal, see how it navigates.
**Test scenarios:**
- "Show me how analytics work"
- "I need to convince my CFO this saves money"
- "Walk me through setting up Salesforce integration"
**Evaluation:** Does agent construct reasonable demo paths? Are navigation primitives sufficient?
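The Week 4 evaluation can be partially automated: record the navigation path the agent took and check it against the features each scenario should cover. A minimal sketch, with illustrative scenario data:

```typescript
// Check an agent's recorded navigation path against the features
// a test scenario is expected to cover.
function evaluatePath(
  path: string[],
  requiredFeatures: string[]
): { pass: boolean; missing: string[] } {
  const visited = new Set(path);
  const missing = requiredFeatures.filter(f => !visited.has(f));
  return { pass: missing.length === 0, missing };
}
```

For the CFO scenario, `requiredFeatures` might be `["analytics", "integrations"]`; a non-empty `missing` list points directly at gaps in your navigation primitives or in the agent's reasoning.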
### Week 5+: Add Visual Rendering (Tambo)
**Task:** Connect Tambo for visual component rendering during navigation.
**Result:** Agent can navigate AND render → complete voice + visual demo.
**Timeline: 5-6 weeks from zero to SimCity-inspired Voice AI demo navigation.**
## The Bottom Line
AI agents managing SimCity through REST APIs isn't a gimmick. It's **proof that complex system navigation via API works**.
If agents can:
- Query city state (population, funds, demand signals)
- Reason about trade-offs (budget constraints, spatial planning)
- Execute actions (build zones, infrastructure, services)
- Achieve goals (reach 50K population, optimize revenue)
Then Voice AI agents can:
- Query product state (user context, current location, features used)
- Reason about trade-offs (show integrations first? analytics first? depends on user role)
- Execute actions (navigate features, render components, explain concepts)
- Achieve goals (help user understand product value, drive conversion)
**The architecture is identical:** Query state → Reason → Act → Advance → Loop.
**The primitives are similar:** Constrained action space, emergent complexity, goal-oriented orchestration.
**The pattern is proven:** 424 cities built by 32 agent mayors.
Hallucinating Splines agents aren't following scripts. They're **navigating a complex system through an API.**
Your product should be navigable the same way.
---
**Voice AI demos that navigate like SimCity agents aren't science fiction—they're MCP servers away.** And the companies exposing products as agent-navigable systems (via state query + navigation action APIs) won't need to maintain 47 different demo flows for 47 different user segments. They'll have one Voice AI agent that constructs custom demos in real-time by reasoning about user context and orchestrating navigation primitives.
Just like SimCity mayors. Except instead of building cities, they're building product understanding. And instead of optimizing population growth, they're optimizing conversion.
The architecture exists. The pattern is proven. The only question is whether you're building MCP servers for agent navigation before your competitors do—or explaining to your board why competitor demos feel so much more intelligent and contextual than your pre-scripted walkthrough tours.
---
*Learn more:*
- [Hallucinating Splines](https://hallucinatingsplines.com)
- [Show HN Discussion](https://news.ycombinator.com/item?id=46946593)
- [GitHub Repository](https://github.com/andrewedunn/hallucinating-splines)
- [MCP Documentation](https://modelcontextprotocol.io)