At its core, Orchestrator starts as a regular LLM chat interface. But with one crucial difference: this chat can spawn new chats.
Each chat stays focused on its specific task. No context pollution. No losing track of what we're doing.

Research on multi-turn conversations shows that LLMs lose an average of 39% of their performance when a conversation runs long: they make early assumptions, get confused, and can't recover from mistakes. By keeping each conversation short and focused, we maintain peak AI performance.

Think of it like this:

```
Main Chat: "Build a todo app"
├── Chat 1: "Design UI"
├── Chat 2: "Backend API"
└── Chat 3: "Frontend"
```

The user manually navigates between chats. Simple, but effective.
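A minimal sketch of that core idea in Python (the `Chat` class and `spawn` method are illustrative names, not an actual Orchestrator API):

```python
from dataclasses import dataclass, field

@dataclass
class Chat:
    """One focused conversation: its own task, its own message history."""
    task: str
    messages: list[str] = field(default_factory=list)
    children: list["Chat"] = field(default_factory=list)

    def spawn(self, task: str) -> "Chat":
        """Create a child chat for a subtask. It starts with an empty
        message list, so nothing from the parent pollutes its context."""
        child = Chat(task=task)
        self.children.append(child)
        return child

main = Chat(task="Build a todo app")
main.spawn("Design UI")
main.spawn("Backend API")
main.spawn("Frontend")
```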
Next, add a visual tree view of the whole project:

```
[Todo App Project]
├── [✓] Design Phase
│   ├── [✓] User Stories
│   └── [✓] Mockups
├── [●] Development (active)
│   ├── [●] Backend API
│   └── [ ] Frontend
└── [ ] Testing
```

Now you see all tasks at once. Click on any node to jump into that conversation. The tree shows what's done, what's active, and what's pending.
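Tracking and rendering those states takes very little machinery. A sketch, with `Status` and `render` as hypothetical names:

```python
from enum import Enum

class Status(Enum):
    PENDING = " "
    ACTIVE = "●"
    DONE = "✓"

def render(node: dict, depth: int = 0) -> None:
    """Print a task and its subtasks as an indented checklist."""
    print("  " * depth + f"[{node['status'].value}] {node['task']}")
    for child in node.get("children", []):
        render(child, depth + 1)

project = {
    "task": "Todo App Project", "status": Status.ACTIVE, "children": [
        {"task": "Design Phase", "status": Status.DONE, "children": [
            {"task": "User Stories", "status": Status.DONE},
            {"task": "Mockups", "status": Status.DONE},
        ]},
        {"task": "Development", "status": Status.ACTIVE, "children": [
            {"task": "Backend API", "status": Status.ACTIVE},
            {"task": "Frontend", "status": Status.PENDING},
        ]},
        {"task": "Testing", "status": Status.PENDING},
    ],
}
render(project)
```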
The AI automatically knows when to split tasks:

```
User: "Add authentication to my app"

AI: "I'll break this down into focused tasks:"
├── Research auth providers (auto)
├── Design auth flow (interactive)
├── Implement login (interactive)
├── Add session management (auto)
└── Write auth tests (auto)
```

The system decides which tasks need your input and which can run automatically.
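One plausible data shape for such a plan (the field names are assumptions for illustration, not a fixed schema):

```python
from dataclasses import dataclass, field
from enum import Enum

class Mode(Enum):
    AUTO = "auto"                # runs without user input
    INTERACTIVE = "interactive"  # pauses for the user

@dataclass
class Subtask:
    objective: str
    mode: Mode
    depends_on: list[str] = field(default_factory=list)

plan = [
    Subtask("Research auth providers", Mode.AUTO),
    Subtask("Design auth flow", Mode.INTERACTIVE,
            depends_on=["Research auth providers"]),
    Subtask("Implement login", Mode.INTERACTIVE,
            depends_on=["Design auth flow"]),
    Subtask("Add session management", Mode.AUTO,
            depends_on=["Implement login"]),
    Subtask("Write auth tests", Mode.AUTO,
            depends_on=["Implement login"]),
]

# Anything autonomous with no unmet dependencies can start immediately.
ready = [t for t in plan if t.mode is Mode.AUTO and not t.depends_on]
```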
Some tasks run by themselves while others wait for you: you can be coding the login component while the AI simultaneously researches auth providers and sets up test infrastructure.
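A sketch of that concurrency with asyncio. The task bodies are stand-ins (a real system would call a model or open an editor), but the scheduling pattern is the point:

```python
import asyncio

async def autonomous(name: str) -> str:
    """Stand-in for a task the AI completes on its own."""
    await asyncio.sleep(1)  # pretend the model is working
    return f"{name}: done"

async def interactive(name: str) -> str:
    """Stand-in for a task that blocks on the user."""
    # Run blocking input() in a thread so other tasks keep running.
    await asyncio.to_thread(input, f"{name}: press Enter when finished ")
    return f"{name}: done"

async def main() -> None:
    # The user handles login while autonomous tasks run alongside.
    results = await asyncio.gather(
        interactive("Implement login"),
        autonomous("Research auth providers"),
        autonomous("Set up test infrastructure"),
    )
    print(results)

asyncio.run(main())
```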
The end state is a complete development environment: every task can run either interactively or autonomously, the system maintains a library of reusable prompts, and each task keeps its own isolated context.
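Context isolation could work roughly like this; `summarize` is a placeholder for whatever compression step the parent applies before handing context down:

```python
def summarize(messages: list[str]) -> str:
    """Placeholder: a real system would ask the model for a summary."""
    return f"(summary of {len(messages)} parent messages)"

def spawn_context(parent_messages: list[str], objective: str) -> list[str]:
    # The child does NOT inherit the parent's full transcript. It starts
    # from a short summary plus its own objective, keeping context small.
    return [
        f"Context from parent: {summarize(parent_messages)}",
        f"Your task: {objective}",
    ]

parent = ["user: build a todo app", "ai: here's a plan...", "user: go ahead"]
child_context = spawn_context(parent, "Design the UI")
```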
The interface pairs the task tree with the active conversation:

```
┌─────────────────────────┬───────────────────────┐
│                         │                       │
│  Task Tree View         │  Active Chat/         │
│                         │  Canvas Editor        │
│  [Project Root]         │                       │
│  ├─[✓] Setup            │  Current: Frontend    │
│  ├─[●] Frontend ←────   │                       │
│  │  ├─[●] Login         │  AI: Let's implement  │
│  │  └─[ ] Dashboard     │  the login component. │
│  └─[ ] Deploy           │                       │
│                         │  You: ...             │
└─────────────────────────┴───────────────────────┘
```

Under the hood, decomposition is driven by a prompt along these lines:

```
You are a task decomposition expert. Given a user request,
decide whether to:
1. Execute directly (if simple and atomic)
2. Decompose into subtasks (if complex)
For each subtask, specify:
- Clear objective
- Execution mode (interactive/autonomous)
- Dependencies
- Expected outputs
Request: [user input]
```

Here's how it plays out:

```
User: "I need to add a shopping cart to my e-commerce site"
Orchestrator:
I'll help you add a shopping cart. Here's my plan:
[Shopping Cart Feature]
├── [Auto] Research best practices
├── [Interactive] Design cart UI
├── [Auto] Set up database schema
├── [Interactive] Implement cart logic
├── [Auto] Create API endpoints
├── [Interactive] Frontend integration
└── [Auto] Write tests
Shall I proceed? You can modify this plan or jump into any task.
```

The user approves, and multiple tasks begin in parallel.
While the user works on UI design, the system has already completed research and is working on the API endpoints. By the time they're done with design, much of the groundwork is complete.
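Wiring it together, the decomposition step is one model call plus parsing. A sketch under assumptions: `call_llm` is a placeholder for whichever model API you use, and the JSON output contract is an addition layered onto the prompt above to make parsing easy:

```python
import json

DECOMPOSE_PROMPT = (
    "You are a task decomposition expert. Given a user request, decide\n"
    "whether to execute directly (if simple and atomic) or decompose into\n"
    "subtasks (if complex). For each subtask specify: objective, mode\n"
    '("auto" or "interactive"), dependencies, and expected outputs.\n'
    "Respond with a JSON list of subtask objects.\n\nRequest: "
)

def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real model API call here."""
    raise NotImplementedError

def decompose(request: str) -> list[dict]:
    """Ask the model for a plan and parse it into subtask dicts."""
    raw = call_llm(DECOMPOSE_PROMPT + request)
    return json.loads(raw)  # e.g. [{"objective": ..., "mode": "auto", ...}]
```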
Orchestrator transforms how we work with AI by embracing a simple truth: focused conversations work better than long, meandering ones. By building a system where chats can create and manage other chats, we create a new paradigm for AI-assisted work - one that's more efficient, more reliable, and more transparent than a single endless conversation.