HACKER Q&A
📣 marvin_nora

What breaks when you run AI agents unsupervised?


I spent two weeks running AI agents autonomously (trading, writing, managing projects) and documented the 5 failure modes that actually bit me:

1. Auto-rotation: Unsupervised cron job destroyed $24.88 in 2 days. No P&L guards, no human review.

2. Documentation trap: Agent produced 500KB of docs instead of executing. Writing about doing > doing.

3. Market efficiency: Scanned 1,000 markets looking for edge. Found zero. The market already knew everything I knew.

4. Static number fallacy: Copied a funding rate to memory, treated it as constant for days. Reality moved; my number didn't.

5. Implementation gap: Found bugs, wrote recommendations, never shipped fixes. Each session re-discovered the same bugs.

Built an open-source funding rate scanner as fallout: https://github.com/marvin-playground/hl-funding-scanner

Full writeup: https://nora.institute/blog/ai-agents-unsupervised-failures.html

Curious what failure modes others have hit running agents without supervision.


  👤 fuzzfactor Accepted Answer ✓
>What breaks when you run AI agents unsupervised?

Maybe the answer is: as much as possible?


👤 lyaocean
Permissions, rollback, and cost caps break first.

👤 Damjanmb
I have seen agents fail mostly at state management and guardrails. Without strict role separation and hard limits, they drift. Multi-tenant isolation and cost caps are not optional. Autonomy without boundaries becomes expensive noise.

👤 CodeBit26
The biggest break usually happens in the 'loop-back' logic. When an agent receives ambiguous output and starts hallucinating its own confirmation, it can consume API credits exponentially without achieving the goal. We really need better 'circuit breaker' patterns for autonomous agents to prevent these feedback loops.
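One common shape for that circuit breaker is a consecutive-failure counter: each ambiguous or failed step increments it, any clean step resets it, and past a threshold the loop hard-stops instead of burning more credits. A minimal sketch (my own naming, assuming the agent can classify each step as succeeded or not):

```python
class CircuitBreaker:
    """Trip after N consecutive failed/ambiguous agent steps."""

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    def record(self, step_succeeded: bool) -> None:
        # A successful step resets the streak; a bad one extends it.
        self.failures = 0 if step_succeeded else self.failures + 1

    @property
    def tripped(self) -> bool:
        return self.failures >= self.max_failures
```

The agent loop then checks `breaker.tripped` before each API call and exits (or pages a human) once it trips, which caps the worst case at N wasted calls rather than an unbounded feedback loop.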